Jan 22 12:49:11 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 22 12:49:11 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 22 12:49:11 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 12:49:11 localhost kernel: BIOS-provided physical RAM map:
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 12:49:11 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 22 12:49:11 localhost kernel: NX (Execute Disable) protection: active
Jan 22 12:49:11 localhost kernel: APIC: Static calls initialized
Jan 22 12:49:11 localhost kernel: SMBIOS 2.8 present.
Jan 22 12:49:11 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 22 12:49:11 localhost kernel: Hypervisor detected: KVM
Jan 22 12:49:11 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 12:49:11 localhost kernel: kvm-clock: using sched offset of 3328585702 cycles
Jan 22 12:49:11 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 12:49:11 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 22 12:49:11 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 22 12:49:11 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 22 12:49:11 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 22 12:49:11 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 12:49:11 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 22 12:49:11 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 22 12:49:11 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 22 12:49:11 localhost kernel: Using GB pages for direct mapping
Jan 22 12:49:11 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 22 12:49:11 localhost kernel: ACPI: Early table checksum verification disabled
Jan 22 12:49:11 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 22 12:49:11 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 22 12:49:11 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 12:49:11 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 22 12:49:11 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 22 12:49:11 localhost kernel: No NUMA configuration found
Jan 22 12:49:11 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 22 12:49:11 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 22 12:49:11 localhost kernel: Zone ranges:
Jan 22 12:49:11 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 12:49:11 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 12:49:11 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel:   Device   empty
Jan 22 12:49:11 localhost kernel: Movable zone start for each node
Jan 22 12:49:11 localhost kernel: Early memory node ranges
Jan 22 12:49:11 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 12:49:11 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 22 12:49:11 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 22 12:49:11 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 12:49:11 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 12:49:11 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 22 12:49:11 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 12:49:11 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 12:49:11 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 12:49:11 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 12:49:11 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 12:49:11 localhost kernel: TSC deadline timer available
Jan 22 12:49:11 localhost kernel: CPU topo: Max. logical packages:   8
Jan 22 12:49:11 localhost kernel: CPU topo: Max. logical dies:       8
Jan 22 12:49:11 localhost kernel: CPU topo: Max. dies per package:   1
Jan 22 12:49:11 localhost kernel: CPU topo: Max. threads per core:   1
Jan 22 12:49:11 localhost kernel: CPU topo: Num. cores per package:     1
Jan 22 12:49:11 localhost kernel: CPU topo: Num. threads per package:   1
Jan 22 12:49:11 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 22 12:49:11 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 22 12:49:11 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 22 12:49:11 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 22 12:49:11 localhost kernel: Booting paravirtualized kernel on KVM
Jan 22 12:49:11 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 12:49:11 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 22 12:49:11 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 22 12:49:11 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 22 12:49:11 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 22 12:49:11 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 22 12:49:11 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 12:49:11 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 22 12:49:11 localhost kernel: random: crng init done
Jan 22 12:49:11 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 22 12:49:11 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 12:49:11 localhost kernel: Fallback order for Node 0: 0 
Jan 22 12:49:11 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 22 12:49:11 localhost kernel: Policy zone: Normal
Jan 22 12:49:11 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 12:49:11 localhost kernel: software IO TLB: area num 8.
Jan 22 12:49:11 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 22 12:49:11 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 22 12:49:11 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 22 12:49:11 localhost kernel: Dynamic Preempt: voluntary
Jan 22 12:49:11 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 12:49:11 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 22 12:49:11 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 22 12:49:11 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 22 12:49:11 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 22 12:49:11 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 22 12:49:11 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 12:49:11 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 22 12:49:11 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 12:49:11 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 12:49:11 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 12:49:11 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 22 12:49:11 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 12:49:11 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 22 12:49:11 localhost kernel: Console: colour VGA+ 80x25
Jan 22 12:49:11 localhost kernel: printk: console [ttyS0] enabled
Jan 22 12:49:11 localhost kernel: ACPI: Core revision 20230331
Jan 22 12:49:11 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 12:49:11 localhost kernel: x2apic enabled
Jan 22 12:49:11 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 12:49:11 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 22 12:49:11 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 22 12:49:11 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 12:49:11 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 12:49:11 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 12:49:11 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 12:49:11 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 12:49:11 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 12:49:11 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 22 12:49:11 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 22 12:49:11 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 22 12:49:11 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 22 12:49:11 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 12:49:11 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 12:49:11 localhost kernel: x86/bugs: return thunk changed
Jan 22 12:49:11 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 12:49:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 12:49:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 12:49:11 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 12:49:11 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 22 12:49:11 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 22 12:49:11 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 22 12:49:11 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 22 12:49:11 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 22 12:49:11 localhost kernel: landlock: Up and running.
Jan 22 12:49:11 localhost kernel: Yama: becoming mindful.
Jan 22 12:49:11 localhost kernel: SELinux:  Initializing.
Jan 22 12:49:11 localhost kernel: LSM support for eBPF active
Jan 22 12:49:11 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 22 12:49:11 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 22 12:49:11 localhost kernel: ... version:                0
Jan 22 12:49:11 localhost kernel: ... bit width:              48
Jan 22 12:49:11 localhost kernel: ... generic registers:      6
Jan 22 12:49:11 localhost kernel: ... value mask:             0000ffffffffffff
Jan 22 12:49:11 localhost kernel: ... max period:             00007fffffffffff
Jan 22 12:49:11 localhost kernel: ... fixed-purpose events:   0
Jan 22 12:49:11 localhost kernel: ... event mask:             000000000000003f
Jan 22 12:49:11 localhost kernel: signal: max sigframe size: 1776
Jan 22 12:49:11 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 22 12:49:11 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 22 12:49:11 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 22 12:49:11 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 22 12:49:11 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 22 12:49:11 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 22 12:49:11 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 22 12:49:11 localhost kernel: node 0 deferred pages initialised in 12ms
Jan 22 12:49:11 localhost kernel: Memory: 7763860K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 22 12:49:11 localhost kernel: devtmpfs: initialized
Jan 22 12:49:11 localhost kernel: x86/mm: Memory block size: 128MB
Jan 22 12:49:11 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 12:49:11 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 22 12:49:11 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 12:49:11 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 12:49:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 22 12:49:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 22 12:49:11 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 22 12:49:11 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 22 12:49:11 localhost kernel: audit: type=2000 audit(1769086149.811:1): state=initialized audit_enabled=0 res=1
Jan 22 12:49:11 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 22 12:49:11 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 12:49:11 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 12:49:11 localhost kernel: cpuidle: using governor menu
Jan 22 12:49:11 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 12:49:11 localhost kernel: PCI: Using configuration type 1 for base access
Jan 22 12:49:11 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 22 12:49:11 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 12:49:11 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 12:49:11 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 12:49:11 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 12:49:11 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 12:49:11 localhost kernel: Demotion targets for Node 0: null
Jan 22 12:49:11 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 12:49:11 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 22 12:49:11 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 22 12:49:11 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 12:49:11 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 12:49:11 localhost kernel: ACPI: Interpreter enabled
Jan 22 12:49:11 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 22 12:49:11 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 12:49:11 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 12:49:11 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 12:49:11 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 22 12:49:11 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 12:49:11 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [3] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [4] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [5] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [6] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [7] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [8] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [9] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [10] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [11] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [12] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [13] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [14] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [15] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [16] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [17] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [18] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [19] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [20] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [21] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [22] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [23] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [24] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [25] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [26] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [27] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [28] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [29] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [30] registered
Jan 22 12:49:11 localhost kernel: acpiphp: Slot [31] registered
Jan 22 12:49:11 localhost kernel: PCI host bridge to bus 0000:00
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 22 12:49:11 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 12:49:11 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 22 12:49:11 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 12:49:11 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 22 12:49:11 localhost kernel: iommu: Default domain type: Translated
Jan 22 12:49:11 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 12:49:11 localhost kernel: SCSI subsystem initialized
Jan 22 12:49:11 localhost kernel: ACPI: bus type USB registered
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver usbfs
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver hub
Jan 22 12:49:11 localhost kernel: usbcore: registered new device driver usb
Jan 22 12:49:11 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 22 12:49:11 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 22 12:49:11 localhost kernel: PTP clock support registered
Jan 22 12:49:11 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 22 12:49:11 localhost kernel: NetLabel: Initializing
Jan 22 12:49:11 localhost kernel: NetLabel:  domain hash size = 128
Jan 22 12:49:11 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 22 12:49:11 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 22 12:49:11 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 22 12:49:11 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 22 12:49:11 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 22 12:49:11 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 22 12:49:11 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 12:49:11 localhost kernel: vgaarb: loaded
Jan 22 12:49:11 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 12:49:11 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 12:49:11 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 12:49:11 localhost kernel: pnp: PnP ACPI init
Jan 22 12:49:11 localhost kernel: pnp 00:03: [dma 2]
Jan 22 12:49:11 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 22 12:49:11 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 12:49:11 localhost kernel: NET: Registered PF_INET protocol family
Jan 22 12:49:11 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 12:49:11 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 22 12:49:11 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 12:49:11 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 12:49:11 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 22 12:49:11 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 12:49:11 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 22 12:49:11 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 12:49:11 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 12:49:11 localhost kernel: NET: Registered PF_XDP protocol family
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 22 12:49:11 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 22 12:49:11 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 22 12:49:11 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 113178 usecs
Jan 22 12:49:11 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 22 12:49:11 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 12:49:11 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 22 12:49:11 localhost kernel: ACPI: bus type thunderbolt registered
Jan 22 12:49:11 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 22 12:49:11 localhost kernel: Initialise system trusted keyrings
Jan 22 12:49:11 localhost kernel: Key type blacklist registered
Jan 22 12:49:11 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 22 12:49:11 localhost kernel: zbud: loaded
Jan 22 12:49:11 localhost kernel: integrity: Platform Keyring initialized
Jan 22 12:49:11 localhost kernel: integrity: Machine keyring initialized
Jan 22 12:49:11 localhost kernel: Freeing initrd memory: 87956K
Jan 22 12:49:11 localhost kernel: NET: Registered PF_ALG protocol family
Jan 22 12:49:11 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 22 12:49:11 localhost kernel: Key type asymmetric registered
Jan 22 12:49:11 localhost kernel: Asymmetric key parser 'x509' registered
Jan 22 12:49:11 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 22 12:49:11 localhost kernel: io scheduler mq-deadline registered
Jan 22 12:49:11 localhost kernel: io scheduler kyber registered
Jan 22 12:49:11 localhost kernel: io scheduler bfq registered
Jan 22 12:49:11 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 22 12:49:11 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 12:49:11 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 22 12:49:11 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 22 12:49:11 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 22 12:49:11 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 12:49:11 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 12:49:11 localhost kernel: Non-volatile memory driver v1.3
Jan 22 12:49:11 localhost kernel: rdac: device handler registered
Jan 22 12:49:11 localhost kernel: hp_sw: device handler registered
Jan 22 12:49:11 localhost kernel: emc: device handler registered
Jan 22 12:49:11 localhost kernel: alua: device handler registered
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 22 12:49:11 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 22 12:49:11 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 22 12:49:11 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 12:49:11 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 22 12:49:11 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 22 12:49:11 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 22 12:49:11 localhost kernel: hub 1-0:1.0: USB hub found
Jan 22 12:49:11 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 22 12:49:11 localhost kernel: usbserial: USB Serial support registered for generic
Jan 22 12:49:11 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 12:49:11 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 12:49:11 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 12:49:11 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 22 12:49:11 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-22T12:49:10 UTC (1769086150)
Jan 22 12:49:11 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 22 12:49:11 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 12:49:11 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 22 12:49:11 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 22 12:49:11 localhost kernel: usbcore: registered new interface driver usbhid
Jan 22 12:49:11 localhost kernel: usbhid: USB HID core driver
Jan 22 12:49:11 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 22 12:49:11 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 22 12:49:11 localhost kernel: Initializing XFRM netlink socket
Jan 22 12:49:11 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 22 12:49:11 localhost kernel: Segment Routing with IPv6
Jan 22 12:49:11 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 22 12:49:11 localhost kernel: mpls_gso: MPLS GSO support
Jan 22 12:49:11 localhost kernel: IPI shorthand broadcast: enabled
Jan 22 12:49:11 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 22 12:49:11 localhost kernel: AES CTR mode by8 optimization enabled
Jan 22 12:49:11 localhost kernel: sched_clock: Marking stable (1305001570, 145978590)->(1582140909, -131160749)
Jan 22 12:49:11 localhost kernel: registered taskstats version 1
Jan 22 12:49:11 localhost kernel: Loading compiled-in X.509 certificates
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 22 12:49:11 localhost kernel: Demotion targets for Node 0: null
Jan 22 12:49:11 localhost kernel: page_owner is disabled
Jan 22 12:49:11 localhost kernel: Key type .fscrypt registered
Jan 22 12:49:11 localhost kernel: Key type fscrypt-provisioning registered
Jan 22 12:49:11 localhost kernel: Key type big_key registered
Jan 22 12:49:11 localhost kernel: Key type encrypted registered
Jan 22 12:49:11 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 12:49:11 localhost kernel: Loading compiled-in module X.509 certificates
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 12:49:11 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 22 12:49:11 localhost kernel: ima: No architecture policies found
Jan 22 12:49:11 localhost kernel: evm: Initialising EVM extended attributes:
Jan 22 12:49:11 localhost kernel: evm: security.selinux
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64 (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.apparmor (disabled)
Jan 22 12:49:11 localhost kernel: evm: security.ima
Jan 22 12:49:11 localhost kernel: evm: security.capability
Jan 22 12:49:11 localhost kernel: evm: HMAC attrs: 0x1
Jan 22 12:49:11 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 22 12:49:11 localhost kernel: Running certificate verification RSA selftest
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 22 12:49:11 localhost kernel: Running certificate verification ECDSA selftest
Jan 22 12:49:11 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 22 12:49:11 localhost kernel: clk: Disabling unused clocks
Jan 22 12:49:11 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 22 12:49:11 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 22 12:49:11 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 22 12:49:11 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 22 12:49:11 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 22 12:49:11 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 22 12:49:11 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 22 12:49:11 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 22 12:49:11 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 22 12:49:11 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 22 12:49:11 localhost kernel: Run /init as init process
Jan 22 12:49:11 localhost kernel:   with arguments:
Jan 22 12:49:11 localhost kernel:     /init
Jan 22 12:49:11 localhost kernel:   with environment:
Jan 22 12:49:11 localhost kernel:     HOME=/
Jan 22 12:49:11 localhost kernel:     TERM=linux
Jan 22 12:49:11 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 22 12:49:11 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 22 12:49:11 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 22 12:49:11 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 12:49:11 localhost systemd[1]: Detected virtualization kvm.
Jan 22 12:49:11 localhost systemd[1]: Detected architecture x86-64.
Jan 22 12:49:11 localhost systemd[1]: Running in initrd.
Jan 22 12:49:11 localhost systemd[1]: No hostname configured, using default hostname.
Jan 22 12:49:11 localhost systemd[1]: Hostname set to <localhost>.
Jan 22 12:49:11 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 22 12:49:11 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 22 12:49:11 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 12:49:11 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 22 12:49:11 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 22 12:49:11 localhost systemd[1]: Reached target Local File Systems.
Jan 22 12:49:11 localhost systemd[1]: Reached target Path Units.
Jan 22 12:49:11 localhost systemd[1]: Reached target Slice Units.
Jan 22 12:49:11 localhost systemd[1]: Reached target Swaps.
Jan 22 12:49:11 localhost systemd[1]: Reached target Timer Units.
Jan 22 12:49:11 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 12:49:11 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 22 12:49:11 localhost systemd[1]: Listening on Journal Socket.
Jan 22 12:49:11 localhost systemd[1]: Listening on udev Control Socket.
Jan 22 12:49:11 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 22 12:49:11 localhost systemd[1]: Reached target Socket Units.
Jan 22 12:49:11 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 22 12:49:11 localhost systemd[1]: Starting Journal Service...
Jan 22 12:49:11 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 12:49:11 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 22 12:49:11 localhost systemd[1]: Starting Create System Users...
Jan 22 12:49:11 localhost systemd[1]: Starting Setup Virtual Console...
Jan 22 12:49:11 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 22 12:49:11 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 22 12:49:11 localhost systemd[1]: Finished Create System Users.
Jan 22 12:49:11 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 12:49:11 localhost systemd-journald[307]: Journal started
Jan 22 12:49:11 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/5492a354d1924c48860299be1884b049) is 8.0M, max 153.6M, 145.6M free.
Jan 22 12:49:11 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 22 12:49:11 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 22 12:49:11 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 22 12:49:11 localhost systemd[1]: Started Journal Service.
Jan 22 12:49:11 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 12:49:11 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 12:49:11 localhost systemd[1]: Finished Setup Virtual Console.
Jan 22 12:49:11 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 22 12:49:11 localhost systemd[1]: Starting dracut cmdline hook...
Jan 22 12:49:11 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 22 12:49:11 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 12:49:11 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 12:49:11 localhost systemd[1]: Finished dracut cmdline hook.
Jan 22 12:49:11 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 22 12:49:11 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 12:49:11 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 22 12:49:11 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 22 12:49:11 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered udp transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered tcp transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 22 12:49:11 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 12:49:12 localhost rpc.statd[445]: Version 2.5.4 starting
Jan 22 12:49:12 localhost rpc.statd[445]: Initializing NSM state
Jan 22 12:49:12 localhost rpc.idmapd[450]: Setting log level to 0
Jan 22 12:49:12 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 22 12:49:12 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 12:49:12 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 12:49:12 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 12:49:12 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 22 12:49:12 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 22 12:49:12 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 22 12:49:12 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 22 12:49:12 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 22 12:49:12 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 22 12:49:12 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 12:49:12 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 22 12:49:12 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 22 12:49:12 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 22 12:49:12 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 12:49:12 localhost systemd[1]: Reached target Network.
Jan 22 12:49:12 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 12:49:12 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 22 12:49:12 localhost kernel:  vda: vda1
Jan 22 12:49:12 localhost systemd[1]: Starting dracut initqueue hook...
Jan 22 12:49:12 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 22 12:49:12 localhost systemd[1]: Reached target System Initialization.
Jan 22 12:49:12 localhost systemd[1]: Reached target Basic System.
Jan 22 12:49:12 localhost kernel: libata version 3.00 loaded.
Jan 22 12:49:12 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 22 12:49:12 localhost kernel: scsi host0: ata_piix
Jan 22 12:49:12 localhost kernel: scsi host1: ata_piix
Jan 22 12:49:12 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 22 12:49:12 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 22 12:49:12 localhost systemd-udevd[489]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 12:49:12 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 12:49:12 localhost systemd[1]: Reached target Initrd Root Device.
Jan 22 12:49:12 localhost kernel: ata1: found unknown device (class 0)
Jan 22 12:49:12 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 22 12:49:12 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 22 12:49:12 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 22 12:49:12 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 22 12:49:12 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 12:49:12 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 22 12:49:12 localhost systemd[1]: Finished dracut initqueue hook.
Jan 22 12:49:12 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 12:49:12 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 22 12:49:12 localhost systemd[1]: Reached target Remote File Systems.
Jan 22 12:49:12 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 22 12:49:12 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 22 12:49:12 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 22 12:49:12 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Jan 22 12:49:12 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 12:49:12 localhost systemd[1]: Mounting /sysroot...
Jan 22 12:49:13 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 22 12:49:13 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 22 12:49:13 localhost kernel: XFS (vda1): Ending clean mount
Jan 22 12:49:13 localhost systemd[1]: Mounted /sysroot.
Jan 22 12:49:13 localhost systemd[1]: Reached target Initrd Root File System.
Jan 22 12:49:13 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 22 12:49:13 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 22 12:49:13 localhost systemd[1]: Reached target Initrd File Systems.
Jan 22 12:49:13 localhost systemd[1]: Reached target Initrd Default Target.
Jan 22 12:49:13 localhost systemd[1]: Starting dracut mount hook...
Jan 22 12:49:13 localhost systemd[1]: Finished dracut mount hook.
Jan 22 12:49:13 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 22 12:49:13 localhost rpc.idmapd[450]: exiting on signal 15
Jan 22 12:49:13 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 22 12:49:13 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 22 12:49:13 localhost systemd[1]: Stopped target Network.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Timer Units.
Jan 22 12:49:13 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Basic System.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Path Units.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Remote File Systems.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Slice Units.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Socket Units.
Jan 22 12:49:13 localhost systemd[1]: Stopped target System Initialization.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Local File Systems.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Swaps.
Jan 22 12:49:13 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut mount hook.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 22 12:49:13 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 22 12:49:13 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 22 12:49:13 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 22 12:49:13 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 22 12:49:13 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 22 12:49:13 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 22 12:49:13 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 22 12:49:13 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 22 12:49:13 localhost systemd[1]: systemd-udevd.service: Consumed 1.053s CPU time.
Jan 22 12:49:13 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 22 12:49:13 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Closed udev Control Socket.
Jan 22 12:49:13 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Closed udev Kernel Socket.
Jan 22 12:49:13 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 22 12:49:13 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 22 12:49:13 localhost systemd[1]: Starting Cleanup udev Database...
Jan 22 12:49:13 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 22 12:49:13 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 22 12:49:13 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Stopped Create System Users.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 12:49:13 localhost systemd[1]: Finished Cleanup udev Database.
Jan 22 12:49:13 localhost systemd[1]: Reached target Switch Root.
Jan 22 12:49:13 localhost systemd[1]: Starting Switch Root...
Jan 22 12:49:13 localhost systemd[1]: Switching root.
Jan 22 12:49:13 localhost systemd-journald[307]: Journal stopped
Jan 22 12:49:14 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Jan 22 12:49:14 localhost kernel: audit: type=1404 audit(1769086153.822:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability open_perms=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 12:49:14 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 12:49:14 localhost kernel: audit: type=1403 audit(1769086153.961:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 22 12:49:14 localhost systemd[1]: Successfully loaded SELinux policy in 143.081ms.
Jan 22 12:49:14 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26ms.
Jan 22 12:49:14 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 12:49:14 localhost systemd[1]: Detected virtualization kvm.
Jan 22 12:49:14 localhost systemd[1]: Detected architecture x86-64.
Jan 22 12:49:14 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 12:49:14 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Stopped Switch Root.
Jan 22 12:49:14 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 12:49:14 localhost systemd[1]: Created slice Slice /system/getty.
Jan 22 12:49:14 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 22 12:49:14 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 22 12:49:14 localhost systemd[1]: Created slice User and Session Slice.
Jan 22 12:49:14 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 12:49:14 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 22 12:49:14 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 22 12:49:14 localhost systemd[1]: Stopped target Switch Root.
Jan 22 12:49:14 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 22 12:49:14 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 22 12:49:14 localhost systemd[1]: Reached target Path Units.
Jan 22 12:49:14 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 22 12:49:14 localhost systemd[1]: Reached target Slice Units.
Jan 22 12:49:14 localhost systemd[1]: Reached target Swaps.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 22 12:49:14 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 22 12:49:14 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 22 12:49:14 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 22 12:49:14 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 22 12:49:14 localhost systemd[1]: Listening on udev Control Socket.
Jan 22 12:49:14 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 22 12:49:14 localhost systemd[1]: Mounting Huge Pages File System...
Jan 22 12:49:14 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 22 12:49:14 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 22 12:49:14 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 22 12:49:14 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 12:49:14 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 22 12:49:14 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 22 12:49:14 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 22 12:49:14 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 22 12:49:14 localhost systemd[1]: Stopped Journal Service.
Jan 22 12:49:14 localhost kernel: fuse: init (API version 7.37)
Jan 22 12:49:14 localhost systemd[1]: Starting Journal Service...
Jan 22 12:49:14 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 22 12:49:14 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 12:49:14 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 22 12:49:14 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 22 12:49:14 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 22 12:49:14 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 22 12:49:14 localhost systemd[1]: Mounted Huge Pages File System.
Jan 22 12:49:14 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 22 12:49:14 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 22 12:49:14 localhost systemd-journald[675]: Journal started
Jan 22 12:49:14 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 12:49:14 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 22 12:49:14 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 22 12:49:14 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Started Journal Service.
Jan 22 12:49:14 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 22 12:49:14 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 22 12:49:14 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 22 12:49:14 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 22 12:49:14 localhost kernel: ACPI: bus type drm_connector registered
Jan 22 12:49:14 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 22 12:49:14 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 12:49:14 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 22 12:49:14 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 22 12:49:14 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 22 12:49:14 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 22 12:49:14 localhost systemd[1]: Mounting FUSE Control File System...
Jan 22 12:49:14 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 12:49:14 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 22 12:49:14 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 22 12:49:14 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 12:49:14 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 22 12:49:14 localhost systemd[1]: Starting Create System Users...
Jan 22 12:49:14 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 12:49:14 localhost systemd-journald[675]: Received client request to flush runtime journal.
Jan 22 12:49:14 localhost systemd[1]: Mounted FUSE Control File System.
Jan 22 12:49:14 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 22 12:49:14 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 22 12:49:14 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 12:49:14 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 22 12:49:14 localhost systemd[1]: Finished Create System Users.
Jan 22 12:49:14 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 12:49:14 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 12:49:14 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 22 12:49:14 localhost systemd[1]: Reached target Local File Systems.
Jan 22 12:49:14 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 22 12:49:14 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 22 12:49:14 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 22 12:49:14 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 22 12:49:14 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 22 12:49:14 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 12:49:14 localhost bootctl[693]: Couldn't find EFI system partition, skipping.
Jan 22 12:49:14 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 22 12:49:14 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 22 12:49:14 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 12:49:14 localhost systemd[1]: Starting Security Auditing Service...
Jan 22 12:49:14 localhost systemd[1]: Starting RPC Bind...
Jan 22 12:49:14 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 22 12:49:14 localhost auditd[699]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 22 12:49:14 localhost auditd[699]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 22 12:49:14 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 22 12:49:14 localhost systemd[1]: Started RPC Bind.
Jan 22 12:49:14 localhost augenrules[704]: /sbin/augenrules: No change
Jan 22 12:49:14 localhost augenrules[719]: No rules
Jan 22 12:49:14 localhost augenrules[719]: enabled 1
Jan 22 12:49:14 localhost augenrules[719]: failure 1
Jan 22 12:49:14 localhost augenrules[719]: pid 699
Jan 22 12:49:14 localhost augenrules[719]: rate_limit 0
Jan 22 12:49:14 localhost augenrules[719]: backlog_limit 8192
Jan 22 12:49:14 localhost augenrules[719]: lost 0
Jan 22 12:49:14 localhost augenrules[719]: backlog 3
Jan 22 12:49:14 localhost augenrules[719]: backlog_wait_time 60000
Jan 22 12:49:14 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 22 12:49:14 localhost augenrules[719]: enabled 1
Jan 22 12:49:14 localhost augenrules[719]: failure 1
Jan 22 12:49:14 localhost augenrules[719]: pid 699
Jan 22 12:49:14 localhost augenrules[719]: rate_limit 0
Jan 22 12:49:14 localhost augenrules[719]: backlog_limit 8192
Jan 22 12:49:14 localhost augenrules[719]: lost 0
Jan 22 12:49:14 localhost augenrules[719]: backlog 2
Jan 22 12:49:14 localhost augenrules[719]: backlog_wait_time 60000
Jan 22 12:49:14 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 22 12:49:14 localhost augenrules[719]: enabled 1
Jan 22 12:49:14 localhost augenrules[719]: failure 1
Jan 22 12:49:14 localhost augenrules[719]: pid 699
Jan 22 12:49:14 localhost augenrules[719]: rate_limit 0
Jan 22 12:49:14 localhost augenrules[719]: backlog_limit 8192
Jan 22 12:49:14 localhost augenrules[719]: lost 0
Jan 22 12:49:14 localhost augenrules[719]: backlog 2
Jan 22 12:49:14 localhost augenrules[719]: backlog_wait_time 60000
Jan 22 12:49:14 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 22 12:49:14 localhost systemd[1]: Started Security Auditing Service.
Jan 22 12:49:14 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 22 12:49:14 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 22 12:49:15 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 22 12:49:15 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 12:49:15 localhost systemd[1]: Starting Update is Completed...
Jan 22 12:49:15 localhost systemd[1]: Finished Update is Completed.
Jan 22 12:49:15 localhost systemd-udevd[727]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 12:49:15 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 12:49:15 localhost systemd[1]: Reached target System Initialization.
Jan 22 12:49:15 localhost systemd[1]: Started dnf makecache --timer.
Jan 22 12:49:15 localhost systemd[1]: Started Daily rotation of log files.
Jan 22 12:49:15 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 22 12:49:15 localhost systemd[1]: Reached target Timer Units.
Jan 22 12:49:15 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 12:49:15 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 22 12:49:15 localhost systemd[1]: Reached target Socket Units.
Jan 22 12:49:15 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 22 12:49:15 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 12:49:15 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 22 12:49:15 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 22 12:49:15 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 12:49:15 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 22 12:49:15 localhost systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 12:49:15 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 22 12:49:15 localhost systemd[1]: Reached target Basic System.
Jan 22 12:49:15 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 22 12:49:15 localhost dbus-broker-lau[760]: Ready
Jan 22 12:49:15 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 22 12:49:15 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 22 12:49:15 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 22 12:49:15 localhost systemd[1]: Starting NTP client/server...
Jan 22 12:49:15 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 22 12:49:15 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 22 12:49:15 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 22 12:49:15 localhost systemd[1]: Started irqbalance daemon.
Jan 22 12:49:15 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 22 12:49:15 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 12:49:15 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 12:49:15 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 12:49:15 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 22 12:49:15 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 22 12:49:15 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 22 12:49:15 localhost systemd[1]: Starting User Login Management...
Jan 22 12:49:15 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 22 12:49:15 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 22 12:49:15 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 22 12:49:15 localhost kernel: Console: switching to colour dummy device 80x25
Jan 22 12:49:15 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 22 12:49:15 localhost kernel: [drm] features: -context_init
Jan 22 12:49:15 localhost kernel: [drm] number of scanouts: 1
Jan 22 12:49:15 localhost kernel: [drm] number of cap sets: 0
Jan 22 12:49:15 localhost chronyd[798]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 12:49:15 localhost chronyd[798]: Loaded 0 symmetric keys
Jan 22 12:49:15 localhost chronyd[798]: Using right/UTC timezone to obtain leap second data
Jan 22 12:49:15 localhost chronyd[798]: Loaded seccomp filter (level 2)
Jan 22 12:49:15 localhost systemd[1]: Started NTP client/server.
Jan 22 12:49:15 localhost systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 12:49:15 localhost systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 12:49:15 localhost systemd-logind[787]: New seat seat0.
Jan 22 12:49:15 localhost systemd[1]: Started User Login Management.
Jan 22 12:49:15 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 22 12:49:15 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 22 12:49:15 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 22 12:49:15 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 22 12:49:15 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 22 12:49:15 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 22 12:49:15 localhost kernel: kvm_amd: TSC scaling supported
Jan 22 12:49:15 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 22 12:49:15 localhost kernel: kvm_amd: Nested Paging enabled
Jan 22 12:49:15 localhost kernel: kvm_amd: LBR virtualization supported
Jan 22 12:49:15 localhost iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Jan 22 12:49:15 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 22 12:49:15 localhost cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 22 Jan 2026 12:49:15 +0000. Up 6.40 seconds.
Jan 22 12:49:15 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 22 12:49:15 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 22 12:49:15 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpngi0hxr5.mount: Deactivated successfully.
Jan 22 12:49:15 localhost systemd[1]: Starting Hostname Service...
Jan 22 12:49:16 localhost systemd[1]: Started Hostname Service.
Jan 22 12:49:16 np0005592159.novalocal systemd-hostnamed[850]: Hostname set to <np0005592159.novalocal> (static)
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Reached target Preparation for Network.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Starting Network Manager...
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2719] NetworkManager (version 1.54.3-2.el9) is starting... (boot:24f4eb82-7451-47a9-a2ab-85f318c16b8a)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2726] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2810] manager[0x56014830a000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2861] hostname: hostname: using hostnamed
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2861] hostname: static hostname changed from (none) to "np0005592159.novalocal"
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2866] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2975] manager[0x56014830a000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.2976] manager[0x56014830a000]: rfkill: WWAN hardware radio set enabled
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3031] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3031] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3032] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3032] manager: Networking is enabled by state file
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3034] settings: Loaded settings plugin: keyfile (internal)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3045] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3068] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3080] dhcp: init: Using DHCP client 'internal'
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3083] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3098] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3105] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3115] device (lo): Activation: starting connection 'lo' (4169075c-72f8-4434-940a-1a390ca696d3)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3126] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3129] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3175] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3179] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3181] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3184] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3186] device (eth0): carrier: link connected
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3189] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3195] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3201] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3205] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3206] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3210] manager: NetworkManager state is now CONNECTING
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3211] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3219] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3222] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Started Network Manager.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Reached target Network.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3494] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3497] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.3504] device (lo): Activation: successful, device activated.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Reached target NFS client services.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Reached target Remote File Systems.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5861] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5870] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5891] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5933] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5935] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5938] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5940] device (eth0): Activation: successful, device activated.
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5945] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 12:49:16 np0005592159.novalocal NetworkManager[854]: <info>  [1769086156.5947] manager: startup complete
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 22 12:49:16 np0005592159.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 22 Jan 2026 12:49:16 +0000. Up 7.69 seconds.
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |  eth0  | True |         38.102.83.5          | 255.255.255.0 | global | fa:16:3e:9d:96:b7 |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe9d:96b7/64 |       .       |  link  | fa:16:3e:9d:96:b7 |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 22 12:49:16 np0005592159.novalocal cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 22 12:49:17 np0005592159.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 12:49:17 np0005592159.novalocal useradd[985]: new group: name=cloud-user, GID=1001
Jan 22 12:49:17 np0005592159.novalocal useradd[985]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 22 12:49:17 np0005592159.novalocal useradd[985]: add 'cloud-user' to group 'adm'
Jan 22 12:49:17 np0005592159.novalocal useradd[985]: add 'cloud-user' to group 'systemd-journal'
Jan 22 12:49:17 np0005592159.novalocal useradd[985]: add 'cloud-user' to shadow group 'adm'
Jan 22 12:49:17 np0005592159.novalocal useradd[985]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Generating public/private rsa key pair.
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: The key fingerprint is:
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: SHA256:ZrAkF9Xrv+nsA28p+s/bLd5i3L5ajk6r69DLcZjH8XE root@np0005592159.novalocal
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: The key's randomart image is:
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: +---[RSA 3072]----+
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |      ....       |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |       .  .      |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |    . +    .     |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |     + o  .      |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |      . S.   . .E|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |       o  + + o o|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |         . O B + |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |          +o%oXo.|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |        .oo%&@+*=|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: The key fingerprint is:
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: SHA256:N07ntx9a6ee1wXNKZCLxHO+EmOEsTiu/ut92zp8Te3Q root@np0005592159.novalocal
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: The key's randomart image is:
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: +---[ECDSA 256]---+
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |                 |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |                 |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |          o .    |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |         o B +   |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |        S X * =  |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |       o * = *o E|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |      . o . . +@+|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |       o ....o*+X|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |      o++o.ooo=B+|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: The key fingerprint is:
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: SHA256:1FUgzkEXvMlB1Efuil9gJ1N6+xIx3oHGUMLDz4R4tWE root@np0005592159.novalocal
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: The key's randomart image is:
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: +--[ED25519 256]--+
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |         .B=@E...|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |         = %=.+..|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |        . =.B=..o|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |       .    +*o= |
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |        S   ..*+=|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |             ooBo|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |            . .o.|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |             ....|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: |              ...|
Jan 22 12:49:18 np0005592159.novalocal cloud-init[918]: +----[SHA256]-----+
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Reached target Network is Online.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting System Logging Service...
Jan 22 12:49:18 np0005592159.novalocal sm-notify[1001]: Version 2.5.4 starting
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting Permit User Sessions...
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 22 12:49:18 np0005592159.novalocal sshd[1003]: Server listening on 0.0.0.0 port 22.
Jan 22 12:49:18 np0005592159.novalocal sshd[1003]: Server listening on :: port 22.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Finished Permit User Sessions.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Started Command Scheduler.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Started Getty on tty1.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 22 12:49:18 np0005592159.novalocal crond[1006]: (CRON) STARTUP (1.5.7)
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Reached target Login Prompts.
Jan 22 12:49:18 np0005592159.novalocal crond[1006]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 22 12:49:18 np0005592159.novalocal crond[1006]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 52% if used.)
Jan 22 12:49:18 np0005592159.novalocal crond[1006]: (CRON) INFO (running with inotify support)
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Started System Logging Service.
Jan 22 12:49:18 np0005592159.novalocal rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Jan 22 12:49:18 np0005592159.novalocal rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Reached target Multi-User System.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 22 12:49:18 np0005592159.novalocal rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 12:49:18 np0005592159.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Jan 22 12:49:18 np0005592159.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 22 12:49:18 np0005592159.novalocal cloud-init[1129]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 22 Jan 2026 12:49:18 +0000. Up 9.51 seconds.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 22 12:49:18 np0005592159.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 22 12:49:19 np0005592159.novalocal dracut[1264]: dracut-057-102.git20250818.el9
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1272]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 22 Jan 2026 12:49:19 +0000. Up 9.87 seconds.
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1282]: #############################################################
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1283]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1285]: 256 SHA256:N07ntx9a6ee1wXNKZCLxHO+EmOEsTiu/ut92zp8Te3Q root@np0005592159.novalocal (ECDSA)
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1287]: 256 SHA256:1FUgzkEXvMlB1Efuil9gJ1N6+xIx3oHGUMLDz4R4tWE root@np0005592159.novalocal (ED25519)
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1289]: 3072 SHA256:ZrAkF9Xrv+nsA28p+s/bLd5i3L5ajk6r69DLcZjH8XE root@np0005592159.novalocal (RSA)
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1290]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1291]: #############################################################
Jan 22 12:49:19 np0005592159.novalocal cloud-init[1272]: Cloud-init v. 24.4-8.el9 finished at Thu, 22 Jan 2026 12:49:19 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.08 seconds
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 22 12:49:19 np0005592159.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 22 12:49:19 np0005592159.novalocal systemd[1]: Reached target Cloud-init target.
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 12:49:19 np0005592159.novalocal dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: memstrack is not available
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: memstrack is not available
Jan 22 12:49:20 np0005592159.novalocal dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 12:49:21 np0005592159.novalocal dracut[1266]: *** Including module: systemd ***
Jan 22 12:49:21 np0005592159.novalocal dracut[1266]: *** Including module: fips ***
Jan 22 12:49:21 np0005592159.novalocal chronyd[798]: Selected source 198.181.199.84 (2.centos.pool.ntp.org)
Jan 22 12:49:21 np0005592159.novalocal chronyd[798]: System clock TAI offset set to 37 seconds
Jan 22 12:49:21 np0005592159.novalocal dracut[1266]: *** Including module: systemd-initrd ***
Jan 22 12:49:21 np0005592159.novalocal dracut[1266]: *** Including module: i18n ***
Jan 22 12:49:21 np0005592159.novalocal dracut[1266]: *** Including module: drm ***
Jan 22 12:49:22 np0005592159.novalocal dracut[1266]: *** Including module: prefixdevname ***
Jan 22 12:49:22 np0005592159.novalocal dracut[1266]: *** Including module: kernel-modules ***
Jan 22 12:49:22 np0005592159.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: *** Including module: kernel-modules-extra ***
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: *** Including module: qemu ***
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: *** Including module: fstab-sys ***
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: *** Including module: rootfs-block ***
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: *** Including module: terminfo ***
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: *** Including module: udev-rules ***
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: Skipping udev rule: 91-permissions.rules
Jan 22 12:49:23 np0005592159.novalocal dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]: *** Including module: virtiofs ***
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]: *** Including module: dracut-systemd ***
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]: *** Including module: usrmount ***
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]: *** Including module: base ***
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]: *** Including module: fs-lib ***
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]: *** Including module: kdumpbase ***
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]:   microcode_ctl module: mangling fw_dir
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 22 12:49:24 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: IRQ 25 affinity is now unmanaged
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: IRQ 31 affinity is now unmanaged
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: IRQ 28 affinity is now unmanaged
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: IRQ 32 affinity is now unmanaged
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: IRQ 30 affinity is now unmanaged
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 22 12:49:25 np0005592159.novalocal irqbalance[785]: IRQ 29 affinity is now unmanaged
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]: *** Including module: openssl ***
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]: *** Including module: shutdown ***
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]: *** Including module: squash ***
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]: *** Including modules done ***
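
The run above is dracut assembling the kdump initramfs: each '*** Including module ... ***' line is one dracut module being pulled into the image. To see which modules a host's dracut offers at all, the standard listing verb can be called directly; a sketch, assuming dracut is on PATH:

    import subprocess

    # "dracut --list-modules" prints one module name per line; the modules
    # included above (systemd, fips, kdumpbase, squash, ...) should appear.
    out = subprocess.run(["dracut", "--list-modules"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)
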
Jan 22 12:49:25 np0005592159.novalocal dracut[1266]: *** Installing kernel module dependencies ***
Jan 22 12:49:26 np0005592159.novalocal dracut[1266]: *** Installing kernel module dependencies done ***
Jan 22 12:49:26 np0005592159.novalocal dracut[1266]: *** Resolving executable dependencies ***
Jan 22 12:49:26 np0005592159.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 12:49:28 np0005592159.novalocal dracut[1266]: *** Resolving executable dependencies done ***
Jan 22 12:49:28 np0005592159.novalocal dracut[1266]: *** Generating early-microcode cpio image ***
Jan 22 12:49:28 np0005592159.novalocal dracut[1266]: *** Store current command line parameters ***
Jan 22 12:49:28 np0005592159.novalocal dracut[1266]: Stored kernel commandline:
Jan 22 12:49:28 np0005592159.novalocal dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Jan 22 12:49:28 np0005592159.novalocal dracut[1266]: *** Install squash loader ***
Jan 22 12:49:29 np0005592159.novalocal dracut[1266]: *** Squashing the files inside the initramfs ***
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4129]: Connection reset by 38.102.83.114 port 54080 [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4131]: Unable to negotiate with 38.102.83.114 port 54084: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4133]: Connection closed by 38.102.83.114 port 54090 [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4135]: Unable to negotiate with 38.102.83.114 port 54104: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4137]: Unable to negotiate with 38.102.83.114 port 54108: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4139]: Connection reset by 38.102.83.114 port 54122 [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4141]: Connection reset by 38.102.83.114 port 54134 [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4143]: Unable to negotiate with 38.102.83.114 port 54142: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 22 12:49:29 np0005592159.novalocal sshd-session[4145]: Unable to negotiate with 38.102.83.114 port 54146: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
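
The burst above is a single remote host (38.102.83.114) probing sshd: each 'Unable to negotiate ... Their offer:' line means the client restricted itself to one host key algorithm that this server does not serve, which is how scanners enumerate supported algorithms. A sketch reproducing one such probe against a test host of your own ('localhost' and the algorithm choice are stand-ins):

    import subprocess

    # Restrict the client to ssh-dss only; against a server without a DSA
    # host key the negotiation fails, mirroring the log lines above.
    proc = subprocess.run(
        ["ssh", "-o", "HostKeyAlgorithms=ssh-dss",
         "-o", "BatchMode=yes", "localhost", "true"],
        capture_output=True, text=True,
    )
    print(proc.returncode, proc.stderr.strip())
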
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: *** Squashing the files inside the initramfs done ***
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: *** Hardlinking files ***
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: Mode:           real
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: Files:          50
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: Linked:         0 files
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: Compared:       0 xattrs
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: Compared:       0 files
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: Saved:          0 B
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: Duration:       0.000904 seconds
Jan 22 12:49:30 np0005592159.novalocal dracut[1266]: *** Hardlinking files done ***
Jan 22 12:49:31 np0005592159.novalocal dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 22 12:49:31 np0005592159.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Jan 22 12:49:31 np0005592159.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Jan 22 12:49:31 np0005592159.novalocal systemd[1]: Finished Crash recovery kernel arming.
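
At this point kexec has staged the crash kernel built from the image above. Two quick checks that the arming actually took, sketched on the assumption of a RHEL-style kdump install:

    import subprocess

    # The kernel's own flag: "1" when a crash kernel is loaded via kexec.
    print(open("/sys/kernel/kexec_crash_loaded").read().strip())

    # kdumpctl's status verb reports the same from the service's view.
    subprocess.run(["kdumpctl", "status"])
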
Jan 22 12:49:31 np0005592159.novalocal systemd[1]: Startup finished in 1.753s (kernel) + 2.827s (initrd) + 17.752s (userspace) = 22.334s.
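
The 'Startup finished' line is systemd's own kernel/initrd/userspace split; the same numbers, plus per-unit costs, remain queryable after boot. A sketch:

    import subprocess

    # "systemd-analyze time" repeats the summary; "blame" lists units
    # ordered by initialization time.
    for verb in ("time", "blame"):
        out = subprocess.run(["systemd-analyze", verb],
                             capture_output=True, text=True)
        print(out.stdout)
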
Jan 22 12:49:46 np0005592159.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 12:50:30 np0005592159.novalocal sshd-session[4301]: Accepted publickey for zuul from 38.102.83.114 port 53856 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 22 12:50:30 np0005592159.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 22 12:50:30 np0005592159.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 22 12:50:30 np0005592159.novalocal systemd-logind[787]: New session 1 of user zuul.
Jan 22 12:50:30 np0005592159.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 22 12:50:30 np0005592159.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Queued start job for default target Main User Target.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Created slice User Application Slice.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Reached target Paths.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Reached target Timers.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Starting D-Bus User Message Bus Socket...
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Starting Create User's Volatile Files and Directories...
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Listening on D-Bus User Message Bus Socket.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Reached target Sockets.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Finished Create User's Volatile Files and Directories.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Reached target Basic System.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Reached target Main User Target.
Jan 22 12:50:30 np0005592159.novalocal systemd[4305]: Startup finished in 164ms.
Jan 22 12:50:30 np0005592159.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 22 12:50:30 np0005592159.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 22 12:50:30 np0005592159.novalocal sshd-session[4301]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
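
The 'Accepted publickey ... SHA256:...' line above identifies the key by OpenSSH's SHA256 fingerprint: the unpadded base64 of a SHA-256 digest over the decoded key blob. A sketch computing that format, reusing one of the ed25519 public keys installed further down in this log:

    import base64
    import hashlib

    pubkey = ("ssh-ed25519 "
              "AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp")
    blob = base64.b64decode(pubkey.split()[1])    # raw key blob
    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
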
Jan 22 12:50:31 np0005592159.novalocal python3[4387]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:35 np0005592159.novalocal sshd-session[4392]: Invalid user sol from 45.148.10.240 port 44080
Jan 22 12:50:35 np0005592159.novalocal python3[4417]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:35 np0005592159.novalocal sshd-session[4392]: Connection closed by invalid user sol 45.148.10.240 port 44080 [preauth]
Jan 22 12:50:41 np0005592159.novalocal python3[4475]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:42 np0005592159.novalocal python3[4515]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 22 12:50:44 np0005592159.novalocal python3[4541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1DCoRB3r0Iy6aGg4LRzpWVb+uDCW+ivahM6mnwYTzs7NyJlgPrnZ6PV7GhjThi3qMi3wdL9+LpBaBPuOhI+k1w3f1FS+zKP3/xb59Ck+AhF8LIp3InS3sgWlvIGvXYvlwuN3aBMHp/hbvFOtbZFxgXhvIlVsk+m1K/J/50vtBBzyri7EjoTWDvY18FZoapjDeqss1t7AvCXVAcsVOfZsyssdWALG/AlGcmeZ9kZ/yza1tS0t7avldh0ZazNkLg/5jp3HQrTFLiETLQx8tBjdEj0Pme6UqjG17uVJkEVl4g3FLGiT4krCLRjW0sA3E3rd5e1m4tBIoSSqoqN2E+V9ctp/6T9Vpe3OcZdgKBUE9yz4tlHgQLxksFY2SiXEQYiWTctsRY30EsMJk2Qg65Fyp/ts6u4u66Uo27jNRB+ZD/vnAY4IKu94a2+6uIW/9oShh4f1cWrBlFzxXaUBj4KHar7HFljsOCavs7NCPccp7JoW8FoXONrfM+rhSgDbeDGE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:50:45 np0005592159.novalocal python3[4565]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:45 np0005592159.novalocal python3[4664]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:50:46 np0005592159.novalocal python3[4735]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086245.5038712-253-258230427090939/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa follow=False checksum=9eec2026f94d681755d58aa430eaf5c6b319017b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:46 np0005592159.novalocal python3[4858]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:50:47 np0005592159.novalocal python3[4929]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086246.4849834-308-233483319450064/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa.pub follow=False checksum=f8a39b98331ab3302b65dacd0b8176268aaf7e5b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:49 np0005592159.novalocal python3[4977]: ansible-ping Invoked with data=pong
Jan 22 12:50:50 np0005592159.novalocal python3[5001]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 12:50:52 np0005592159.novalocal python3[5059]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 22 12:50:53 np0005592159.novalocal python3[5091]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:54 np0005592159.novalocal python3[5115]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:54 np0005592159.novalocal python3[5139]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:54 np0005592159.novalocal python3[5163]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:55 np0005592159.novalocal python3[5187]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:55 np0005592159.novalocal python3[5211]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
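
A reading note for the ansible-file lines above and below: Ansible logs the mode parameter as a decimal integer, so mode=493 is the octal 0755, mode=448 is 0700, and so on. Converting the values that appear in this log:

    # Decimal mode values from this log, mapped back to octal notation.
    for mode in (448, 493, 511, 420, 384, 288):
        print(mode, oct(mode))
    # 448 -> 0o700, 493 -> 0o755, 511 -> 0o777,
    # 420 -> 0o644, 384 -> 0o600, 288 -> 0o440
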
Jan 22 12:50:57 np0005592159.novalocal sudo[5235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndpaotylznmrmpqlfpstgbpwzxugjsjr ; /usr/bin/python3'
Jan 22 12:50:57 np0005592159.novalocal sudo[5235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:50:57 np0005592159.novalocal python3[5237]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:57 np0005592159.novalocal sudo[5235]: pam_unix(sudo:session): session closed for user root
Jan 22 12:50:57 np0005592159.novalocal sudo[5313]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieoxtxbbhdrmqfcttmpzmgkefdptzbuf ; /usr/bin/python3'
Jan 22 12:50:57 np0005592159.novalocal sudo[5313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:50:58 np0005592159.novalocal python3[5315]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:50:58 np0005592159.novalocal sudo[5313]: pam_unix(sudo:session): session closed for user root
Jan 22 12:50:58 np0005592159.novalocal sudo[5386]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmrrmyxxudlqjliylblivjponrpwmnzi ; /usr/bin/python3'
Jan 22 12:50:58 np0005592159.novalocal sudo[5386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:50:58 np0005592159.novalocal python3[5388]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086257.5701644-34-132735642924106/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:50:58 np0005592159.novalocal sudo[5386]: pam_unix(sudo:session): session closed for user root
Jan 22 12:50:59 np0005592159.novalocal python3[5436]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:50:59 np0005592159.novalocal python3[5460]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:50:59 np0005592159.novalocal python3[5484]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592159.novalocal python3[5508]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592159.novalocal python3[5532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592159.novalocal python3[5556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:00 np0005592159.novalocal python3[5580]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:01 np0005592159.novalocal python3[5604]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:01 np0005592159.novalocal python3[5628]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:01 np0005592159.novalocal python3[5652]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:02 np0005592159.novalocal python3[5676]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:02 np0005592159.novalocal python3[5700]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:02 np0005592159.novalocal python3[5724]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:02 np0005592159.novalocal python3[5748]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:03 np0005592159.novalocal python3[5772]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:03 np0005592159.novalocal python3[5796]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:03 np0005592159.novalocal python3[5820]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592159.novalocal python3[5844]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592159.novalocal python3[5868]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592159.novalocal python3[5892]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:04 np0005592159.novalocal python3[5916]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:05 np0005592159.novalocal python3[5940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:05 np0005592159.novalocal python3[5964]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:05 np0005592159.novalocal python3[5988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:06 np0005592159.novalocal python3[6012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 12:51:06 np0005592159.novalocal python3[6036]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
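
Each ansible-authorized_key call above adds one developer key to /home/zuul/.ssh/authorized_keys idempotently: a key already present is left alone. A sketch of the equivalent append-if-missing logic (the path handling is simplified, the key is a placeholder, and the module's manage_dir/exclusive options are ignored):

    import os

    def add_key(path: str, key: str) -> bool:
        """Append an OpenSSH public key line unless its blob is present."""
        os.makedirs(os.path.dirname(path), mode=0o700, exist_ok=True)
        existing = ""
        if os.path.exists(path):
            with open(path) as f:
                existing = f.read()
        if key.split()[1] in existing:      # match on the base64 blob
            return False                    # already authorized; no change
        with open(path, "a") as f:
            f.write(key + "\n")
        os.chmod(path, 0o600)
        return True
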
Jan 22 12:51:08 np0005592159.novalocal sudo[6060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyxmfwngsykzrwnwddsfhpfmzznvfbhh ; /usr/bin/python3'
Jan 22 12:51:08 np0005592159.novalocal sudo[6060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:08 np0005592159.novalocal python3[6062]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 12:51:08 np0005592159.novalocal systemd[1]: Starting Time & Date Service...
Jan 22 12:51:08 np0005592159.novalocal systemd[1]: Started Time & Date Service.
Jan 22 12:51:09 np0005592159.novalocal systemd-timedated[6064]: Changed time zone to 'UTC' (UTC).
Jan 22 12:51:09 np0005592159.novalocal sudo[6060]: pam_unix(sudo:session): session closed for user root
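
The community.general.timezone task above ended up talking to systemd-timedated (hence the 'Changed time zone' line from that service). The direct on-host equivalent, sketched with the standard timedatectl verb and assuming root:

    import subprocess

    # Same effect as the Ansible task: set the system time zone via
    # systemd-timedated.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
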
Jan 22 12:51:09 np0005592159.novalocal sudo[6091]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huthqoftdsojwfaptfvvbqucegpzpgbu ; /usr/bin/python3'
Jan 22 12:51:09 np0005592159.novalocal sudo[6091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:09 np0005592159.novalocal python3[6093]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:09 np0005592159.novalocal sudo[6091]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:10 np0005592159.novalocal python3[6169]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:10 np0005592159.novalocal python3[6240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769086269.735884-254-188511559888107/source _original_basename=tmplj16a1bi follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:11 np0005592159.novalocal python3[6340]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:11 np0005592159.novalocal python3[6411]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086270.8039196-305-15278230150983/source _original_basename=tmp7ik5k7i8 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:12 np0005592159.novalocal sudo[6511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxgyhcdyqbyoxttkawltkizkjagbasyt ; /usr/bin/python3'
Jan 22 12:51:12 np0005592159.novalocal sudo[6511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:12 np0005592159.novalocal python3[6513]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:12 np0005592159.novalocal sudo[6511]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:13 np0005592159.novalocal sudo[6584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzhxbycyunwsmgzdstxnabiiqefsiwtg ; /usr/bin/python3'
Jan 22 12:51:13 np0005592159.novalocal sudo[6584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:13 np0005592159.novalocal python3[6586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086272.489041-384-194965756008516/source _original_basename=tmpvj899g3v follow=False checksum=19d309ebea5b58181725fc1dc4cea95ea4d18865 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:13 np0005592159.novalocal sudo[6584]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:13 np0005592159.novalocal python3[6634]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:51:14 np0005592159.novalocal python3[6660]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:51:14 np0005592159.novalocal sudo[6738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihwrqelpqsjxxsjvhemajrjyhiwezubz ; /usr/bin/python3'
Jan 22 12:51:14 np0005592159.novalocal sudo[6738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:14 np0005592159.novalocal python3[6740]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:51:14 np0005592159.novalocal sudo[6738]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:15 np0005592159.novalocal sudo[6811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyplomeoxjkfdzfxhrxrdhbsdbbubled ; /usr/bin/python3'
Jan 22 12:51:15 np0005592159.novalocal sudo[6811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:15 np0005592159.novalocal python3[6813]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086274.4080887-454-224720339979334/source _original_basename=tmpzkadzulz follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:15 np0005592159.novalocal sudo[6811]: pam_unix(sudo:session): session closed for user root
Jan 22 12:51:15 np0005592159.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvbryrsmytacohhufqrbgvhdpcbaefas ; /usr/bin/python3'
Jan 22 12:51:15 np0005592159.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:15 np0005592159.novalocal sshd-session[6839]: Connection closed by 203.55.131.5 port 60126 [preauth]
Jan 22 12:51:16 np0005592159.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-37d2-1cc7-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:51:16 np0005592159.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
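
The pattern above is the safe way to ship a sudoers drop-in: the file is copied with mode 0440 (logged as mode=288) and then checked with 'visudo -c' before anything relies on it. A sketch of the same validate-before-install step; the rule content is a placeholder, not what zuul-sudo-grep actually contains:

    import os
    import subprocess
    import tempfile

    content = "zuul ALL=(ALL) NOPASSWD:ALL\n"   # illustrative rule only

    with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
        tmp.write(content)
    try:
        # "visudo -c -f" parses the candidate file without touching
        # /etc/sudoers; a non-zero exit means reject the drop-in.
        subprocess.run(["visudo", "-c", "-f", tmp.name], check=True)
        # only now move it into /etc/sudoers.d/ with mode 0440
    finally:
        os.unlink(tmp.name)
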
Jan 22 12:51:16 np0005592159.novalocal python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163efc-24cc-37d2-1cc7-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 22 12:51:18 np0005592159.novalocal python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:24 np0005592159.novalocal sshd-session[6923]: Invalid user  from 64.62.156.25 port 47305
Jan 22 12:51:28 np0005592159.novalocal sshd-session[6923]: Connection closed by invalid user  64.62.156.25 port 47305 [preauth]
Jan 22 12:51:39 np0005592159.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 12:51:42 np0005592159.novalocal sudo[6950]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihqogofdshkmxdtaoldtfamwjkaasacv ; /usr/bin/python3'
Jan 22 12:51:42 np0005592159.novalocal sudo[6950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:51:43 np0005592159.novalocal python3[6952]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:51:43 np0005592159.novalocal sudo[6950]: pam_unix(sudo:session): session closed for user root
Jan 22 12:52:43 np0005592159.novalocal sshd-session[4314]: Received disconnect from 38.102.83.114 port 53856:11: disconnected by user
Jan 22 12:52:43 np0005592159.novalocal sshd-session[4314]: Disconnected from user zuul 38.102.83.114 port 53856
Jan 22 12:52:43 np0005592159.novalocal sshd-session[4301]: pam_unix(sshd:session): session closed for user zuul
Jan 22 12:52:43 np0005592159.novalocal systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Jan 22 12:52:45 np0005592159.novalocal systemd[4305]: Starting Mark boot as successful...
Jan 22 12:52:45 np0005592159.novalocal systemd[4305]: Finished Mark boot as successful.
Jan 22 12:52:51 np0005592159.novalocal sshd-session[6954]: Invalid user solana from 45.148.10.240 port 39068
Jan 22 12:52:51 np0005592159.novalocal sshd-session[6954]: Connection closed by invalid user solana 45.148.10.240 port 39068 [preauth]
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 22 12:53:20 np0005592159.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 22 12:53:20 np0005592159.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
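
The kernel lines above record a second virtio NIC being hot-plugged: the hypervisor exposes a new PCI function at 0000:00:07.0 (vendor:device 1af4:1000, virtio-net), the kernel assigns its BARs, and virtio-pci enables it. The device can be confirmed from sysfs; a sketch using the address from the log:

    from pathlib import Path

    dev = Path("/sys/bus/pci/devices/0000:00:07.0")
    for attr in ("vendor", "device", "class"):
        # expect 0x1af4 / 0x1000 / 0x020000 per the kernel lines above
        print(attr, (dev / attr).read_text().strip())
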
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6576] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 12:53:20 np0005592159.novalocal systemd-udevd[6957]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6786] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6831] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6836] device (eth1): carrier: link connected
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6839] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6849] policy: auto-activating connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7)
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6856] device (eth1): Activation: starting connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7)
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6857] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6861] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6867] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 12:53:20 np0005592159.novalocal NetworkManager[854]: <info>  [1769086400.6875] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
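
From the NetworkManager side the hot-plug is handled without any profile on disk: a default 'Wired connection 1' is generated, auto-activated, and DHCP started. Checking the resulting device state is ordinary nmcli usage; a sketch:

    import subprocess

    # "nmcli device show eth1" prints the device's state, its active
    # connection, and any addresses DHCP has delivered.
    out = subprocess.run(["nmcli", "device", "show", "eth1"],
                         capture_output=True, text=True)
    print(out.stdout)
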
Jan 22 12:53:21 np0005592159.novalocal sshd-session[6960]: Accepted publickey for zuul from 38.102.83.114 port 40280 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 12:53:21 np0005592159.novalocal systemd-logind[787]: New session 3 of user zuul.
Jan 22 12:53:21 np0005592159.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 22 12:53:21 np0005592159.novalocal sshd-session[6960]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 12:53:21 np0005592159.novalocal python3[6987]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-97dc-dff7-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:53:31 np0005592159.novalocal sudo[7065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqtzjwkreixxdewahrgamcruahfbrvf ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:53:31 np0005592159.novalocal sudo[7065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:53:32 np0005592159.novalocal python3[7067]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:53:32 np0005592159.novalocal sudo[7065]: pam_unix(sudo:session): session closed for user root
Jan 22 12:53:32 np0005592159.novalocal sudo[7138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmhgepcnbiyvgsxrkhgywmwzvndqwgq ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:53:32 np0005592159.novalocal sudo[7138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:53:32 np0005592159.novalocal python3[7140]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086411.7768462-206-203881184855265/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2700db3a9722b22b06523fa143bc24bf7058877a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:53:32 np0005592159.novalocal sudo[7138]: pam_unix(sudo:session): session closed for user root
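
The file installed above is a keyfile-format connection profile; NetworkManager insists on root ownership and mode 0600 for these, which is exactly what the copy task set. A sketch of composing a minimal profile of this shape (the id, interface name, and addressing are assumptions, not the contents of the real ci-private-network profile):

    import configparser

    profile = configparser.ConfigParser()
    profile["connection"] = {
        "id": "ci-private-network",
        "type": "ethernet",
        "interface-name": "eth1",
    }
    profile["ipv4"] = {"method": "auto"}
    profile["ipv6"] = {"method": "ignore"}

    # NetworkManager keyfiles are key=value without spaces around "=".
    with open("ci-private-network.nmconnection", "w") as f:
        profile.write(f, space_around_delimiters=False)
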
Jan 22 12:53:32 np0005592159.novalocal sudo[7188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwkosttkseaqmjmrsoseshyqcfbfdhuu ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:53:32 np0005592159.novalocal sudo[7188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:53:33 np0005592159.novalocal python3[7190]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Stopping Network Manager...
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1652] caught SIGTERM, shutting down normally.
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1673] dhcp4 (eth0): canceled DHCP transaction
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1674] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1674] dhcp4 (eth0): state changed no lease
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1677] manager: NetworkManager state is now CONNECTING
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1814] dhcp4 (eth1): canceled DHCP transaction
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1814] dhcp4 (eth1): state changed no lease
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[854]: <info>  [1769086413.1891] exiting (success)
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Stopped Network Manager.
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: NetworkManager.service: Consumed 1.780s CPU time, 10.0M memory peak.
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Starting Network Manager...
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.2591] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:24f4eb82-7451-47a9-a2ab-85f318c16b8a)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.2595] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.2665] manager[0x563ab64fb000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Starting Hostname Service...
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Started Hostname Service.
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3724] hostname: hostname: using hostnamed
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3725] hostname: static hostname changed from (none) to "np0005592159.novalocal"
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3732] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3739] manager[0x563ab64fb000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3740] manager[0x563ab64fb000]: rfkill: WWAN hardware radio set enabled
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3788] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3788] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3789] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3791] manager: Networking is enabled by state file
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3794] settings: Loaded settings plugin: keyfile (internal)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3800] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3845] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3861] dhcp: init: Using DHCP client 'internal'
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3866] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3875] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3883] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3896] device (lo): Activation: starting connection 'lo' (4169075c-72f8-4434-940a-1a390ca696d3)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3907] device (eth0): carrier: link connected
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3915] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3924] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3925] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3936] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3950] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3959] device (eth1): carrier: link connected
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3966] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3976] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7) (indicated)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3977] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3986] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.3998] device (eth1): Activation: starting connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7)
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Started Network Manager.
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4005] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4011] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4016] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4019] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4022] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4028] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4032] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4036] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4041] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4051] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4064] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4080] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4086] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4111] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4118] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4126] device (lo): Activation: successful, device activated.
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4137] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4147] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 12:53:33 np0005592159.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4213] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4242] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4244] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4248] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4251] device (eth0): Activation: successful, device activated.
Jan 22 12:53:33 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086413.4257] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 12:53:33 np0005592159.novalocal sudo[7188]: pam_unix(sudo:session): session closed for user root
Jan 22 12:53:33 np0005592159.novalocal python3[7275]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-97dc-dff7-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 12:53:43 np0005592159.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 12:54:03 np0005592159.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2401] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 12:54:18 np0005592159.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 12:54:18 np0005592159.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2785] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2791] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2810] device (eth1): Activation: successful, device activated.
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2825] manager: startup complete
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2828] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <warn>  [1769086458.2848] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2866] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2984] dhcp4 (eth1): canceled DHCP transaction
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2985] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.2985] dhcp4 (eth1): state changed no lease
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3005] policy: auto-activating connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3012] device (eth1): Activation: starting connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3013] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3017] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3027] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3040] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3085] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3087] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 12:54:18 np0005592159.novalocal NetworkManager[7199]: <info>  [1769086458.3095] device (eth1): Activation: successful, device activated.
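The eth1 sequence above shows the assumed 'Wired connection 1' profile getting no DHCP lease within its 45-second transaction, failing with ip-config-unavailable, and being replaced when policy auto-activates 'ci-private-network'. A sketch of how that fallback could be inspected from a shell, assuming the connection names visible in this log:

    # Per-device state and the connection currently active on each interface.
    nmcli device status
    # Autoconnect settings of the profile that policy picked for eth1.
    nmcli -f connection.id,connection.autoconnect,connection.autoconnect-priority \
        connection show ci-private-network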
Jan 22 12:54:28 np0005592159.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 12:54:33 np0005592159.novalocal sshd-session[6963]: Received disconnect from 38.102.83.114 port 40280:11: disconnected by user
Jan 22 12:54:33 np0005592159.novalocal sshd-session[6963]: Disconnected from user zuul 38.102.83.114 port 40280
Jan 22 12:54:33 np0005592159.novalocal sshd-session[6960]: pam_unix(sshd:session): session closed for user zuul
Jan 22 12:54:33 np0005592159.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 12:54:33 np0005592159.novalocal systemd[1]: session-3.scope: Consumed 1.840s CPU time.
Jan 22 12:54:33 np0005592159.novalocal systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Jan 22 12:54:33 np0005592159.novalocal systemd-logind[787]: Removed session 3.
Jan 22 12:54:59 np0005592159.novalocal sshd-session[7306]: Accepted publickey for zuul from 38.102.83.114 port 41320 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 12:54:59 np0005592159.novalocal systemd-logind[787]: New session 4 of user zuul.
Jan 22 12:54:59 np0005592159.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 22 12:54:59 np0005592159.novalocal sshd-session[7306]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 12:55:00 np0005592159.novalocal sudo[7385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzicbmgxroztqoumbxvepmlixstrfgyc ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:55:00 np0005592159.novalocal sudo[7385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:55:00 np0005592159.novalocal python3[7387]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 12:55:00 np0005592159.novalocal sudo[7385]: pam_unix(sudo:session): session closed for user root
Jan 22 12:55:00 np0005592159.novalocal sudo[7458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxjsolnbaljppomsapdygsmpxwcacshh ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 22 12:55:00 np0005592159.novalocal sudo[7458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 12:55:00 np0005592159.novalocal python3[7460]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086499.966251-373-55631708650966/source _original_basename=tmpado48coe follow=False checksum=5e7e0974f47bfd675c68ead6f6109233c4c9d481 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 12:55:00 np0005592159.novalocal sudo[7458]: pam_unix(sudo:session): session closed for user root
Jan 22 12:55:02 np0005592159.novalocal sshd-session[7309]: Connection closed by 38.102.83.114 port 41320
Jan 22 12:55:02 np0005592159.novalocal sshd-session[7306]: pam_unix(sshd:session): session closed for user zuul
Jan 22 12:55:02 np0005592159.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 12:55:02 np0005592159.novalocal systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Jan 22 12:55:02 np0005592159.novalocal systemd-logind[787]: Removed session 4.
Jan 22 12:55:05 np0005592159.novalocal sshd-session[7486]: Invalid user solana from 45.148.10.240 port 35494
Jan 22 12:55:05 np0005592159.novalocal sshd-session[7486]: Connection closed by invalid user solana 45.148.10.240 port 35494 [preauth]
Jan 22 12:55:45 np0005592159.novalocal systemd[4305]: Created slice User Background Tasks Slice.
Jan 22 12:55:45 np0005592159.novalocal systemd[4305]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 12:55:45 np0005592159.novalocal systemd[4305]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 12:57:21 np0005592159.novalocal sshd-session[7492]: Invalid user solana from 45.148.10.240 port 51384
Jan 22 12:57:21 np0005592159.novalocal sshd-session[7492]: Connection closed by invalid user solana 45.148.10.240 port 51384 [preauth]
Jan 22 12:59:34 np0005592159.novalocal sshd-session[7495]: Invalid user sol from 45.148.10.240 port 44040
Jan 22 12:59:34 np0005592159.novalocal sshd-session[7495]: Connection closed by invalid user sol 45.148.10.240 port 44040 [preauth]
Jan 22 12:59:43 np0005592159.novalocal sshd-session[7497]: Invalid user user from 69.12.83.184 port 47286
Jan 22 12:59:44 np0005592159.novalocal sshd-session[7497]: Connection closed by invalid user user 69.12.83.184 port 47286 [preauth]
Jan 22 13:00:10 np0005592159.novalocal sshd-session[7500]: Accepted publickey for zuul from 38.102.83.114 port 38850 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:00:10 np0005592159.novalocal systemd-logind[787]: New session 5 of user zuul.
Jan 22 13:00:10 np0005592159.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 22 13:00:10 np0005592159.novalocal sshd-session[7500]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:00:10 np0005592159.novalocal sudo[7527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmvuolwslicwciozubxsowegfsbmune ; /usr/bin/python3'
Jan 22 13:00:10 np0005592159.novalocal sudo[7527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:10 np0005592159.novalocal python3[7529]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca0-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
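The lsblk call above reads the MAJ:MIN number of /dev/vda; that value is the key used by the cgroup v2 io.max writes later in this session (252:0 is typical for the first virtio-blk disk). A sketch of capturing it for reuse, under that assumption:

    # Capture the disk's major:minor to key io.max entries against it.
    devno=$(lsblk -nd -o MAJ:MIN /dev/vda)   # e.g. "252:0" on virtio-blk
    echo "$devno"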
Jan 22 13:00:10 np0005592159.novalocal sudo[7527]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:11 np0005592159.novalocal sudo[7557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekebjulfhljnobaxqncfbhtojkbpwocj ; /usr/bin/python3'
Jan 22 13:00:11 np0005592159.novalocal sudo[7557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:11 np0005592159.novalocal python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:11 np0005592159.novalocal sudo[7557]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:11 np0005592159.novalocal sudo[7583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thbcujmbwkkavdgqkuborvprvijawgea ; /usr/bin/python3'
Jan 22 13:00:11 np0005592159.novalocal sudo[7583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:12 np0005592159.novalocal python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:12 np0005592159.novalocal sudo[7583]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:12 np0005592159.novalocal sudo[7609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwoccmxhkeybbalcojencfpigycdjmll ; /usr/bin/python3'
Jan 22 13:00:12 np0005592159.novalocal sudo[7609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:12 np0005592159.novalocal python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:12 np0005592159.novalocal sudo[7609]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:12 np0005592159.novalocal sudo[7635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsttmyrzthpmwbkzclarqxkqeyarnvbj ; /usr/bin/python3'
Jan 22 13:00:12 np0005592159.novalocal sudo[7635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:12 np0005592159.novalocal python3[7637]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:12 np0005592159.novalocal sudo[7635]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:12 np0005592159.novalocal sudo[7661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edzxntjrvulnwfodbbqnhfgayypzgeau ; /usr/bin/python3'
Jan 22 13:00:12 np0005592159.novalocal sudo[7661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:13 np0005592159.novalocal python3[7663]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:13 np0005592159.novalocal sudo[7661]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:13 np0005592159.novalocal sudo[7739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwiicryseqsefgvxdhyadvszyhxtwbtu ; /usr/bin/python3'
Jan 22 13:00:13 np0005592159.novalocal sudo[7739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:13 np0005592159.novalocal python3[7741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:00:13 np0005592159.novalocal sudo[7739]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:13 np0005592159.novalocal sudo[7812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mylgsccmmlpprdujagqtwkvzercvhhfa ; /usr/bin/python3'
Jan 22 13:00:13 np0005592159.novalocal sudo[7812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:13 np0005592159.novalocal python3[7814]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086813.2488887-364-108910745133351/source _original_basename=tmpwmjwvnyv follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:00:13 np0005592159.novalocal sudo[7812]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:14 np0005592159.novalocal sudo[7862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqksynpwyccvvaiezpinlxvofwdtaoyr ; /usr/bin/python3'
Jan 22 13:00:14 np0005592159.novalocal sudo[7862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:14 np0005592159.novalocal python3[7864]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:00:14 np0005592159.novalocal systemd[1]: Reloading.
Jan 22 13:00:15 np0005592159.novalocal systemd-rc-local-generator[7881]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:00:15 np0005592159.novalocal sudo[7862]: pam_unix(sudo:session): session closed for user root
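systemd-rc-local-generator skips /etc/rc.d/rc.local on every reload above because the file is not executable, so rc-local.service is never generated. If the script were actually wanted (nothing in this log attempts it), the conventional fix would be:

    # Mark rc.local executable so the generator creates rc-local.service
    # on the next daemon reload. Hypothetical remediation, not run here.
    chmod +x /etc/rc.d/rc.local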
Jan 22 13:00:16 np0005592159.novalocal sudo[7917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiyazwxxinbttqbwwzythseptzckswle ; /usr/bin/python3'
Jan 22 13:00:16 np0005592159.novalocal sudo[7917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:16 np0005592159.novalocal python3[7919]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 22 13:00:16 np0005592159.novalocal sudo[7917]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:17 np0005592159.novalocal sudo[7943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubdhcxxzqudvfusjhvaeusxqqasgusr ; /usr/bin/python3'
Jan 22 13:00:17 np0005592159.novalocal sudo[7943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:17 np0005592159.novalocal python3[7945]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592159.novalocal sudo[7943]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:18 np0005592159.novalocal sudo[7971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toapfoqttymboufovaqobegecqaitbze ; /usr/bin/python3'
Jan 22 13:00:18 np0005592159.novalocal sudo[7971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:18 np0005592159.novalocal python3[7973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592159.novalocal sudo[7971]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:18 np0005592159.novalocal sudo[7999]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbcrakhiolauisueqmrzrrpbmjbfflxb ; /usr/bin/python3'
Jan 22 13:00:18 np0005592159.novalocal sudo[7999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:18 np0005592159.novalocal python3[8001]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592159.novalocal sudo[7999]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:18 np0005592159.novalocal sudo[8027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyjbikxlhsavvwyrrikckcfdxdoxgiht ; /usr/bin/python3'
Jan 22 13:00:18 np0005592159.novalocal sudo[8027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:18 np0005592159.novalocal python3[8029]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:00:18 np0005592159.novalocal sudo[8027]: pam_unix(sudo:session): session closed for user root
Jan 22 13:00:19 np0005592159.novalocal python3[8056]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca7-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
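The four shell tasks above write identical limits to the io.max file of each top-level cgroup, capping device 252:0 at 18000 read/write IOPS and 262144000 B/s (250 MiB/s) in each direction, and the final task reads every file back to verify. A condensed sketch of the same operation, assuming cgroup v2 with the io controller enabled on these slices:

    # Apply one throttle line per top-level cgroup, then read it back.
    limits="252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$limits" > "/sys/fs/cgroup/$cg/io.max"
        cat "/sys/fs/cgroup/$cg/io.max"
    done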
Jan 22 13:00:19 np0005592159.novalocal python3[8086]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 13:00:22 np0005592159.novalocal sshd-session[7503]: Connection closed by 38.102.83.114 port 38850
Jan 22 13:00:22 np0005592159.novalocal sshd-session[7500]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:00:22 np0005592159.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 13:00:22 np0005592159.novalocal systemd[1]: session-5.scope: Consumed 4.584s CPU time.
Jan 22 13:00:22 np0005592159.novalocal systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Jan 22 13:00:22 np0005592159.novalocal systemd-logind[787]: Removed session 5.
Jan 22 13:00:24 np0005592159.novalocal sshd-session[8091]: Accepted publickey for zuul from 38.102.83.114 port 44716 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:00:24 np0005592159.novalocal systemd-logind[787]: New session 6 of user zuul.
Jan 22 13:00:24 np0005592159.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 22 13:00:24 np0005592159.novalocal sshd-session[8091]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:00:24 np0005592159.novalocal sudo[8118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svuvrygrdcsymhuonjwnokkidiinqbez ; /usr/bin/python3'
Jan 22 13:00:24 np0005592159.novalocal sudo[8118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:00:25 np0005592159.novalocal python3[8120]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 13:00:31 np0005592159.novalocal setsebool[8159]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 22 13:00:31 np0005592159.novalocal setsebool[8159]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  Converting 383 SID table entries...
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:00:45 np0005592159.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:01:01 np0005592159.novalocal CROND[8191]: (root) CMD (run-parts /etc/cron.hourly)
Jan 22 13:01:01 np0005592159.novalocal run-parts[8194]: (/etc/cron.hourly) starting 0anacron
Jan 22 13:01:01 np0005592159.novalocal anacron[8202]: Anacron started on 2026-01-22
Jan 22 13:01:01 np0005592159.novalocal anacron[8202]: Will run job `cron.daily' in 8 min.
Jan 22 13:01:01 np0005592159.novalocal anacron[8202]: Will run job `cron.weekly' in 28 min.
Jan 22 13:01:01 np0005592159.novalocal anacron[8202]: Will run job `cron.monthly' in 48 min.
Jan 22 13:01:01 np0005592159.novalocal anacron[8202]: Jobs will be executed sequentially
Jan 22 13:01:01 np0005592159.novalocal run-parts[8204]: (/etc/cron.hourly) finished 0anacron
Jan 22 13:01:01 np0005592159.novalocal CROND[8190]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  Converting 387 SID table entries...
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:01:02 np0005592159.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:01:21 np0005592159.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 13:01:21 np0005592159.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:01:21 np0005592159.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:01:21 np0005592159.novalocal systemd[1]: Reloading.
Jan 22 13:01:21 np0005592159.novalocal systemd-rc-local-generator[8945]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:01:21 np0005592159.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:01:24 np0005592159.novalocal sudo[8118]: pam_unix(sudo:session): session closed for user root
Jan 22 13:01:25 np0005592159.novalocal python3[10659]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163efc-24cc-af35-cd98-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:01:26 np0005592159.novalocal kernel: evm: overlay not supported
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: Starting D-Bus User Message Bus...
Jan 22 13:01:26 np0005592159.novalocal dbus-broker-launch[11936]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 22 13:01:26 np0005592159.novalocal dbus-broker-launch[11936]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: Started D-Bus User Message Bus.
Jan 22 13:01:26 np0005592159.novalocal dbus-broker-lau[11936]: Ready
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: Created slice Slice /user.
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: podman-11817.scope: unit configures an IP firewall, but not running as root.
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: (This warning is only shown for the first unit using IP firewalling.)
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: Started podman-11817.scope.
Jan 22 13:01:26 np0005592159.novalocal systemd[4305]: Started podman-pause-3b1c51bd.scope.
Jan 22 13:01:27 np0005592159.novalocal sudo[12734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztanewobzkfteioxdwdodxpgcrvnotdr ; /usr/bin/python3'
Jan 22 13:01:27 np0005592159.novalocal sudo[12734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:27 np0005592159.novalocal python3[12760]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.194:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.194:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:01:27 np0005592159.novalocal python3[12760]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 22 13:01:27 np0005592159.novalocal sudo[12734]: pam_unix(sudo:session): session closed for user root
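Given the blockinfile arguments logged above (marker '# {mark} ANSIBLE MANAGED BLOCK' with BEGIN/END, appended at EOF), the block added to /etc/containers/registries.conf would read as follows, registering the CI registry as insecure (plain HTTP or unverified TLS):

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.194:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK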
Jan 22 13:01:28 np0005592159.novalocal sshd-session[8094]: Connection closed by 38.102.83.114 port 44716
Jan 22 13:01:28 np0005592159.novalocal sshd-session[8091]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:01:28 np0005592159.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 13:01:28 np0005592159.novalocal systemd[1]: session-6.scope: Consumed 47.874s CPU time.
Jan 22 13:01:28 np0005592159.novalocal systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Jan 22 13:01:28 np0005592159.novalocal systemd-logind[787]: Removed session 6.
Jan 22 13:01:49 np0005592159.novalocal sshd-session[20179]: Unable to negotiate with 38.102.83.41 port 40416: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 22 13:01:49 np0005592159.novalocal sshd-session[20185]: Connection closed by 38.102.83.41 port 40394 [preauth]
Jan 22 13:01:49 np0005592159.novalocal sshd-session[20182]: Connection closed by 38.102.83.41 port 40406 [preauth]
Jan 22 13:01:49 np0005592159.novalocal sshd-session[20184]: Unable to negotiate with 38.102.83.41 port 40422: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 22 13:01:49 np0005592159.novalocal sshd-session[20187]: Unable to negotiate with 38.102.83.41 port 40426: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
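The 13:01:49 burst above is a client (or scanner) that only accepts ssh-ed25519 and FIDO sk-* host key algorithms, none of which this sshd serves, so negotiation fails preauth. If ed25519 host keys were actually wanted on this host (nothing in this log sets them up), the standard step would be:

    # Generate any missing default host key types, then restart sshd.
    # Hypothetical remediation; not performed in this log.
    ssh-keygen -A
    systemctl restart sshd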
Jan 22 13:01:50 np0005592159.novalocal sshd-session[20213]: Invalid user sol from 45.148.10.240 port 48990
Jan 22 13:01:50 np0005592159.novalocal sshd-session[20213]: Connection closed by invalid user sol 45.148.10.240 port 48990 [preauth]
Jan 22 13:01:54 np0005592159.novalocal sshd-session[21378]: Accepted publickey for zuul from 38.102.83.114 port 40828 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:01:54 np0005592159.novalocal systemd-logind[787]: New session 7 of user zuul.
Jan 22 13:01:54 np0005592159.novalocal systemd[1]: Started Session 7 of User zuul.
Jan 22 13:01:54 np0005592159.novalocal sshd-session[21378]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:01:54 np0005592159.novalocal python3[21468]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 13:01:54 np0005592159.novalocal sudo[21703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efulharebkrabgzxdhzgjrujmddbxxck ; /usr/bin/python3'
Jan 22 13:01:54 np0005592159.novalocal sudo[21703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:55 np0005592159.novalocal python3[21711]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 13:01:55 np0005592159.novalocal sudo[21703]: pam_unix(sudo:session): session closed for user root
Jan 22 13:01:55 np0005592159.novalocal irqbalance[785]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 22 13:01:55 np0005592159.novalocal irqbalance[785]: IRQ 27 affinity is now unmanaged
Jan 22 13:01:55 np0005592159.novalocal sudo[22035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzmkjxlawzdctuoiejojdefxlyboyne ; /usr/bin/python3'
Jan 22 13:01:55 np0005592159.novalocal sudo[22035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:55 np0005592159.novalocal python3[22044]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005592159.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 22 13:01:55 np0005592159.novalocal useradd[22094]: new group: name=cloud-admin, GID=1002
Jan 22 13:01:55 np0005592159.novalocal useradd[22094]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 22 13:01:56 np0005592159.novalocal sudo[22035]: pam_unix(sudo:session): session closed for user root
Jan 22 13:01:59 np0005592159.novalocal sudo[23184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqbiaeaswfhhhjtfqbhuobpzwsmdsdj ; /usr/bin/python3'
Jan 22 13:01:59 np0005592159.novalocal sudo[23184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:01:59 np0005592159.novalocal python3[23191]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
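The three ansible.posix.authorized_key tasks above install the same ECDSA public key for zuul, root, and the freshly created cloud-admin user. The module is idempotent; a rough shell equivalent (a sketch, not the module's actual implementation; the full key string appears verbatim in the log lines above):

    # Append the key only if it is not already present; run per target user.
    key='ecdsa-sha2-nistp256 AAAAE2VjZHNh...ENxo= zuul@np0005592156.novalocal'
    umask 077
    mkdir -p ~/.ssh
    grep -qxF "$key" ~/.ssh/authorized_keys 2>/dev/null \
        || printf '%s\n' "$key" >> ~/.ssh/authorized_keys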
Jan 22 13:01:59 np0005592159.novalocal sudo[23184]: pam_unix(sudo:session): session closed for user root
Jan 22 13:02:00 np0005592159.novalocal sudo[23583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehidmtpllqqztrqqigbrmgbguueovgem ; /usr/bin/python3'
Jan 22 13:02:00 np0005592159.novalocal sudo[23583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:02:00 np0005592159.novalocal python3[23592]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:02:00 np0005592159.novalocal sudo[23583]: pam_unix(sudo:session): session closed for user root
Jan 22 13:02:01 np0005592159.novalocal sudo[23831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyunvlvtewuavnvpdvwhjxzcfzfbvxaj ; /usr/bin/python3'
Jan 22 13:02:01 np0005592159.novalocal sudo[23831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:02:01 np0005592159.novalocal python3[23837]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086920.377604-170-19221314951872/source _original_basename=tmpswz6jnnk follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:02:01 np0005592159.novalocal sudo[23831]: pam_unix(sudo:session): session closed for user root
Jan 22 13:02:01 np0005592159.novalocal sudo[24153]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgmnmihcwtgfhtpsaeajpbfhmhipolys ; /usr/bin/python3'
Jan 22 13:02:01 np0005592159.novalocal sudo[24153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:02:02 np0005592159.novalocal python3[24155]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Jan 22 13:02:02 np0005592159.novalocal systemd[1]: Starting Hostname Service...
Jan 22 13:02:02 np0005592159.novalocal systemd[1]: Started Hostname Service.
Jan 22 13:02:02 np0005592159.novalocal systemd-hostnamed[24255]: Changed pretty hostname to 'compute-2'
Jan 22 13:02:02 compute-2 systemd-hostnamed[24255]: Hostname set to <compute-2> (static)
Jan 22 13:02:02 compute-2 NetworkManager[7199]: <info>  [1769086922.2909] hostname: static hostname changed from "np0005592159.novalocal" to "compute-2"
Jan 22 13:02:02 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 13:02:02 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 13:02:02 compute-2 sudo[24153]: pam_unix(sudo:session): session closed for user root
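The hostname task above runs with use=systemd, so it drives systemd-hostnamed over D-Bus; that is why systemd-hostnamed logs both the pretty and static change, NetworkManager immediately picks up the new static hostname, and the log prefix flips from np0005592159.novalocal to compute-2 mid-stream. The interactive equivalent would be roughly:

    # Set the hostname via systemd-hostnamed (what use=systemd does).
    hostnamectl set-hostname compute-2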
Jan 22 13:02:02 compute-2 sshd-session[21413]: Connection closed by 38.102.83.114 port 40828
Jan 22 13:02:02 compute-2 sshd-session[21378]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:02:02 compute-2 systemd[1]: session-7.scope: Deactivated successfully.
Jan 22 13:02:02 compute-2 systemd[1]: session-7.scope: Consumed 2.323s CPU time.
Jan 22 13:02:02 compute-2 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Jan 22 13:02:02 compute-2 systemd-logind[787]: Removed session 7.
Jan 22 13:02:12 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 13:02:28 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:02:28 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:02:28 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1min 5.501s CPU time.
Jan 22 13:02:28 compute-2 systemd[1]: run-r43094218693f467588d414b5e14fe722.service: Deactivated successfully.
Jan 22 13:02:32 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 13:03:03 compute-2 sshd-session[29950]: Invalid user 1234 from 69.12.83.184 port 48738
Jan 22 13:03:04 compute-2 sshd-session[29950]: Connection closed by invalid user 1234 69.12.83.184 port 48738 [preauth]
Jan 22 13:03:43 compute-2 sshd-session[29953]: error: kex_exchange_identification: read: Connection reset by peer
Jan 22 13:03:43 compute-2 sshd-session[29953]: Connection reset by 176.120.22.52 port 16198
Jan 22 13:04:08 compute-2 sshd-session[29954]: Invalid user sol from 45.148.10.240 port 35556
Jan 22 13:04:09 compute-2 sshd-session[29954]: Connection closed by invalid user sol 45.148.10.240 port 35556 [preauth]
Jan 22 13:04:25 compute-2 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 22 13:04:26 compute-2 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 22 13:04:26 compute-2 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 22 13:04:26 compute-2 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 22 13:06:26 compute-2 sshd-session[29962]: Invalid user sol from 45.148.10.240 port 43676
Jan 22 13:06:26 compute-2 sshd-session[29962]: Connection closed by invalid user sol 45.148.10.240 port 43676 [preauth]
Jan 22 13:07:01 compute-2 sshd-session[29964]: Accepted publickey for zuul from 38.102.83.41 port 45620 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:07:01 compute-2 systemd-logind[787]: New session 8 of user zuul.
Jan 22 13:07:01 compute-2 systemd[1]: Started Session 8 of User zuul.
Jan 22 13:07:01 compute-2 sshd-session[29964]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:07:01 compute-2 python3[30040]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:07:03 compute-2 sudo[30154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxdoweeytrdsarqlumefjtkdrrdbtjnv ; /usr/bin/python3'
Jan 22 13:07:03 compute-2 sudo[30154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:03 compute-2 python3[30156]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:03 compute-2 sudo[30154]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:03 compute-2 sudo[30227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptuuhqguplxbsyjdcbcausggymvxwwrb ; /usr/bin/python3'
Jan 22 13:07:03 compute-2 sudo[30227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:03 compute-2 python3[30229]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:03 compute-2 sudo[30227]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:03 compute-2 sudo[30253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbupwqptdksnzyvifrjfojgugufwuhuj ; /usr/bin/python3'
Jan 22 13:07:03 compute-2 sudo[30253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:04 compute-2 python3[30255]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:04 compute-2 sudo[30253]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:04 compute-2 sudo[30326]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzmhkjgujsiuonqswthobquiraejaye ; /usr/bin/python3'
Jan 22 13:07:04 compute-2 sudo[30326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:04 compute-2 python3[30328]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:04 compute-2 sudo[30326]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:04 compute-2 sudo[30352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dywjcwukkornrxrgxlujipduvsxzrmwd ; /usr/bin/python3'
Jan 22 13:07:04 compute-2 sudo[30352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:04 compute-2 python3[30354]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:04 compute-2 sudo[30352]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:04 compute-2 sudo[30425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxirtdsfbbwbaegejficlcahjyissmys ; /usr/bin/python3'
Jan 22 13:07:04 compute-2 sudo[30425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:05 compute-2 python3[30427]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:05 compute-2 sudo[30425]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:05 compute-2 sudo[30451]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexkinolvxdzkibnaonqmcsvzslvrkcm ; /usr/bin/python3'
Jan 22 13:07:05 compute-2 sudo[30451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:05 compute-2 python3[30453]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:05 compute-2 sudo[30451]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:05 compute-2 sudo[30524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeipbghpepvlawasdgpgsjstgnkhpysm ; /usr/bin/python3'
Jan 22 13:07:05 compute-2 sudo[30524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:05 compute-2 python3[30526]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:05 compute-2 sudo[30524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:05 compute-2 sudo[30550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kigivukxdodbgcgsqqwxnqolsgaktegl ; /usr/bin/python3'
Jan 22 13:07:05 compute-2 sudo[30550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:05 compute-2 python3[30552]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:05 compute-2 sudo[30550]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:06 compute-2 sudo[30623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdcebmdpdatgsluczwsdkggxcnktnaym ; /usr/bin/python3'
Jan 22 13:07:06 compute-2 sudo[30623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:06 compute-2 python3[30625]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:06 compute-2 sudo[30623]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:06 compute-2 sudo[30649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtxngvpmsilpsjaytxhcsdrwpwqzutpl ; /usr/bin/python3'
Jan 22 13:07:06 compute-2 sudo[30649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:06 compute-2 python3[30651]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:06 compute-2 sudo[30649]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:06 compute-2 sudo[30722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogactqvwufdsvmosuizbtrixyjorlaeh ; /usr/bin/python3'
Jan 22 13:07:06 compute-2 sudo[30722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:06 compute-2 python3[30724]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:06 compute-2 sudo[30722]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:06 compute-2 sudo[30748]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwldrnmvvbugczkhrqznapexrheahnzq ; /usr/bin/python3'
Jan 22 13:07:06 compute-2 sudo[30748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:07 compute-2 python3[30750]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:07:07 compute-2 sudo[30748]: pam_unix(sudo:session): session closed for user root
Jan 22 13:07:07 compute-2 sudo[30821]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krjvwuoammlvatgmeysmklhvfftthyug ; /usr/bin/python3'
Jan 22 13:07:07 compute-2 sudo[30821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:07:07 compute-2 python3[30823]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:07:07 compute-2 sudo[30821]: pam_unix(sudo:session): session closed for user root
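Each copy task above records the sha1 checksum of the repo file it staged into /etc/yum.repos.d. A minimal verification sketch, using only paths and checksums taken from the log lines above:

    cd /etc/yum.repos.d
    sha1sum repo-setup-centos-powertools.repo   # expect 4b0cf99aa89c5c5be0151545863a7a7568f67568
    sha1sum repo-setup-centos-appstream.repo    # expect e89244d2503b2996429dda1857290c1e91e393a1
    sha1sum repo-setup-centos-baseos.repo       # expect 36d926db23a40dbfa5c84b5e4d43eac6fa2301d6
    sha1sum delorean.repo.md5                   # expect 2583a70b3ee76a9837350b0837bc004a8e52405c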
Jan 22 13:07:20 compute-2 python3[30871]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:08:45 compute-2 sshd-session[30874]: Invalid user sol from 45.148.10.240 port 40750
Jan 22 13:08:45 compute-2 sshd-session[30874]: Connection closed by invalid user sol 45.148.10.240 port 40750 [preauth]
Jan 22 13:09:01 compute-2 anacron[8202]: Job `cron.daily' started
Jan 22 13:09:01 compute-2 anacron[8202]: Job `cron.daily' terminated
Jan 22 13:10:18 compute-2 sshd-session[30878]: Connection closed by 195.177.94.68 port 56958
Jan 22 13:10:18 compute-2 sshd-session[30879]: Connection closed by authenticating user root 195.177.94.68 port 56968 [preauth]
Jan 22 13:10:24 compute-2 sshd-session[30881]: Invalid user pf from 69.12.83.184 port 51786
Jan 22 13:10:24 compute-2 sshd-session[30881]: Connection closed by invalid user pf 69.12.83.184 port 51786 [preauth]
Jan 22 13:10:58 compute-2 sshd-session[30883]: Invalid user sol from 45.148.10.240 port 60868
Jan 22 13:10:58 compute-2 sshd-session[30883]: Connection closed by invalid user sol 45.148.10.240 port 60868 [preauth]
Jan 22 13:12:20 compute-2 sshd-session[29967]: Received disconnect from 38.102.83.41 port 45620:11: disconnected by user
Jan 22 13:12:20 compute-2 sshd-session[29967]: Disconnected from user zuul 38.102.83.41 port 45620
Jan 22 13:12:20 compute-2 sshd-session[29964]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:12:20 compute-2 systemd[1]: session-8.scope: Deactivated successfully.
Jan 22 13:12:20 compute-2 systemd[1]: session-8.scope: Consumed 4.750s CPU time.
Jan 22 13:12:20 compute-2 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Jan 22 13:12:20 compute-2 systemd-logind[787]: Removed session 8.
Jan 22 13:12:47 compute-2 sshd-session[30888]: Connection closed by 69.12.83.184 port 52386
Jan 22 13:13:11 compute-2 sshd-session[30890]: Invalid user sol from 45.148.10.240 port 41940
Jan 22 13:13:11 compute-2 sshd-session[30890]: Connection closed by invalid user sol 45.148.10.240 port 41940 [preauth]
Jan 22 13:14:37 compute-2 sshd-session[30892]: Invalid user cisco from 69.12.83.184 port 53320
Jan 22 13:14:37 compute-2 sshd-session[30892]: Connection closed by invalid user cisco 69.12.83.184 port 53320 [preauth]
Jan 22 13:15:28 compute-2 sshd-session[30894]: Invalid user sol from 45.148.10.240 port 45124
Jan 22 13:15:28 compute-2 sshd-session[30894]: Connection closed by invalid user sol 45.148.10.240 port 45124 [preauth]
Jan 22 13:17:43 compute-2 sshd-session[30898]: Invalid user funded from 45.148.10.240 port 53850
Jan 22 13:17:43 compute-2 sshd-session[30898]: Connection closed by invalid user funded 45.148.10.240 port 53850 [preauth]
Jan 22 13:18:37 compute-2 sshd-session[30901]: Invalid user client from 69.12.83.184 port 54722
Jan 22 13:18:37 compute-2 sshd-session[30901]: Connection closed by invalid user client 69.12.83.184 port 54722 [preauth]
Jan 22 13:20:01 compute-2 sshd-session[30904]: Invalid user sol from 45.148.10.240 port 34902
Jan 22 13:20:01 compute-2 sshd-session[30904]: Connection closed by invalid user sol 45.148.10.240 port 34902 [preauth]
Jan 22 13:21:49 compute-2 sshd-session[30906]: Connection closed by 69.12.83.184 port 56080 [preauth]
Jan 22 13:21:58 compute-2 sshd-session[30908]: Accepted publickey for zuul from 192.168.122.30 port 44286 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:21:58 compute-2 systemd-logind[787]: New session 9 of user zuul.
Jan 22 13:21:58 compute-2 systemd[1]: Started Session 9 of User zuul.
Jan 22 13:21:58 compute-2 sshd-session[30908]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:21:59 compute-2 python3.9[31061]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:00 compute-2 sudo[31240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llkcnocvfmkgzkauvycjhfruqvfefqyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088120.187346-60-230494309292768/AnsiballZ_command.py'
Jan 22 13:22:00 compute-2 sudo[31240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:00 compute-2 python3.9[31242]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:22:10 compute-2 sudo[31240]: pam_unix(sudo:session): session closed for user root
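The _raw_params logged above is a complete shell script: it downloads the repo-setup tool, installs it into a throwaway venv, generates the delorean/antelope repo files, and cleans up after itself. A sketch of a quick post-run check that the generated repos are enabled (the repo ids are an assumption inferred from the dnf makecache output further down):

    dnf repolist --enabled | grep -E 'delorean|dlrn-antelope'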
Jan 22 13:22:11 compute-2 sshd-session[30911]: Connection closed by 192.168.122.30 port 44286
Jan 22 13:22:11 compute-2 sshd-session[30908]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:22:11 compute-2 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Jan 22 13:22:11 compute-2 systemd[1]: session-9.scope: Deactivated successfully.
Jan 22 13:22:11 compute-2 systemd[1]: session-9.scope: Consumed 8.010s CPU time.
Jan 22 13:22:11 compute-2 systemd-logind[787]: Removed session 9.
Jan 22 13:22:14 compute-2 sshd-session[31301]: Invalid user admin from 69.12.83.184 port 56372
Jan 22 13:22:14 compute-2 sshd-session[31301]: Connection closed by invalid user admin 69.12.83.184 port 56372 [preauth]
Jan 22 13:22:22 compute-2 sshd-session[31304]: Invalid user sol from 45.148.10.240 port 40420
Jan 22 13:22:22 compute-2 sshd-session[31304]: Connection closed by invalid user sol 45.148.10.240 port 40420 [preauth]
Jan 22 13:22:26 compute-2 sshd-session[31306]: Accepted publickey for zuul from 192.168.122.30 port 37138 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:22:26 compute-2 systemd-logind[787]: New session 10 of user zuul.
Jan 22 13:22:26 compute-2 systemd[1]: Started Session 10 of User zuul.
Jan 22 13:22:26 compute-2 sshd-session[31306]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:22:27 compute-2 python3.9[31459]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 13:22:29 compute-2 python3.9[31633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:29 compute-2 sudo[31783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrzvceivavxkzncdxrlakzrkqjkirbux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088149.4428687-95-217649330083793/AnsiballZ_command.py'
Jan 22 13:22:29 compute-2 sudo[31783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:30 compute-2 python3.9[31785]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:22:30 compute-2 sudo[31783]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:31 compute-2 sudo[31936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewodtznbjdvuryccnyngryblsfhepily ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088150.5678098-131-209915045673645/AnsiballZ_stat.py'
Jan 22 13:22:31 compute-2 sudo[31936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:31 compute-2 python3.9[31938]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:22:31 compute-2 sudo[31936]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:31 compute-2 sudo[32088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-himccsjudvqbnimzdxykvcbmmzcuclcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088151.5437603-156-169845728413337/AnsiballZ_file.py'
Jan 22 13:22:31 compute-2 sudo[32088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:32 compute-2 python3.9[32090]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:22:32 compute-2 sudo[32088]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:32 compute-2 sudo[32240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieticdmhvfmxyepqryiuzryaqqsyrjkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088152.4649065-180-93825057625088/AnsiballZ_stat.py'
Jan 22 13:22:32 compute-2 sudo[32240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:32 compute-2 python3.9[32242]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:22:32 compute-2 sudo[32240]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:33 compute-2 sudo[32363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yppxyncanlqpxwqvfijtoonlvqdptdnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088152.4649065-180-93825057625088/AnsiballZ_copy.py'
Jan 22 13:22:33 compute-2 sudo[32363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:33 compute-2 python3.9[32365]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088152.4649065-180-93825057625088/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:22:33 compute-2 sudo[32363]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:34 compute-2 sudo[32515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krmxkvzpffexjxujbzfjlkjhgeyahxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088153.9413967-224-99517912188363/AnsiballZ_setup.py'
Jan 22 13:22:34 compute-2 sudo[32515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:34 compute-2 python3.9[32517]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:34 compute-2 sudo[32515]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:35 compute-2 sudo[32671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyjiuwpxaznbpvjxyjwkegpnbibuuwam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088154.9826655-249-111141615486846/AnsiballZ_file.py'
Jan 22 13:22:35 compute-2 sudo[32671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:35 compute-2 python3.9[32673]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:22:35 compute-2 sudo[32671]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:36 compute-2 sudo[32823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajztuwemaimxchbubawcpxivsxaaavto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088155.7838254-275-41556259567674/AnsiballZ_file.py'
Jan 22 13:22:36 compute-2 sudo[32823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:36 compute-2 python3.9[32825]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:22:36 compute-2 sudo[32823]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:37 compute-2 python3.9[32975]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:22:42 compute-2 python3.9[33228]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:22:42 compute-2 python3.9[33378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:44 compute-2 python3.9[33533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:22:45 compute-2 sudo[33689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uudzfqmeaefecllozsxyurekoglyzqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088164.9783285-420-149034428497209/AnsiballZ_setup.py'
Jan 22 13:22:45 compute-2 sudo[33689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:45 compute-2 python3.9[33691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:22:45 compute-2 sudo[33689]: pam_unix(sudo:session): session closed for user root
Jan 22 13:22:46 compute-2 sudo[33773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parvifqnwncebmrqzatiikztfarwowex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088164.9783285-420-149034428497209/AnsiballZ_dnf.py'
Jan 22 13:22:46 compute-2 sudo[33773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:22:46 compute-2 python3.9[33775]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:23:05 compute-2 sshd-session[33379]: error: kex_exchange_identification: read: Connection reset by peer
Jan 22 13:23:05 compute-2 sshd-session[33379]: Connection reset by 69.12.83.184 port 56610
Jan 22 13:23:20 compute-2 sshd-session[33900]: Invalid user user from 45.148.10.121 port 50952
Jan 22 13:23:20 compute-2 sshd-session[33900]: Connection closed by invalid user user 45.148.10.121 port 50952 [preauth]
Jan 22 13:23:27 compute-2 sshd-session[33845]: Connection closed by 69.12.83.184 port 56766
Jan 22 13:23:42 compute-2 systemd[1]: Reloading.
Jan 22 13:23:42 compute-2 systemd-rc-local-generator[33975]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:23:42 compute-2 systemd[1]: Starting dnf makecache...
Jan 22 13:23:42 compute-2 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 22 13:23:43 compute-2 dnf[33988]: Failed determining last makecache time.
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-barbican-42b4c41831408a8e323 141 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 164 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-cinder-1c00d6490d88e436f26ef 176 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-python-stevedore-c4acc5639fd2329372142 154 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-python-cloudkitty-tests-tempest-2c80f8 154 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-os-refresh-config-9bfc52b5049be2d8de61 171 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 158 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-python-designate-tests-tempest-347fdbc 162 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 systemd[1]: Reloading.
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-glance-1fd12c29b339f30fe823e 152 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 171 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-manila-3c01b7181572c95dac462 155 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 systemd-rc-local-generator[34033]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-python-whitebox-neutron-tests-tempest-  94 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-octavia-ba397f07a7331190208c 115 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-watcher-c014f81a8647287f6dcc 150 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-ansible-config_template-5ccaa22121a7ff 152 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 151 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-swift-dc98a8463506ac520c469a 166 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-python-tempestconf-8515371b7cceebd4282 104 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 dnf[33988]: delorean-openstack-heat-ui-013accbfd179753bc3f0 101 kB/s | 3.0 kB     00:00
Jan 22 13:23:43 compute-2 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 22 13:23:43 compute-2 systemd[1]: Reloading.
Jan 22 13:23:43 compute-2 systemd-rc-local-generator[34083]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:23:43 compute-2 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 22 13:23:43 compute-2 dnf[33988]: CentOS Stream 9 - BaseOS                         29 kB/s | 6.7 kB     00:00
Jan 22 13:23:44 compute-2 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 13:23:44 compute-2 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 13:23:44 compute-2 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 13:23:44 compute-2 dnf[33988]: CentOS Stream 9 - AppStream                      30 kB/s | 6.8 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: CentOS Stream 9 - CRB                            56 kB/s | 6.6 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: CentOS Stream 9 - Extras packages                31 kB/s | 7.3 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: dlrn-antelope-testing                           113 kB/s | 3.0 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: dlrn-antelope-build-deps                        106 kB/s | 3.0 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: centos9-rabbitmq                                 89 kB/s | 3.0 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: centos9-storage                                 103 kB/s | 3.0 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: centos9-opstools                                106 kB/s | 3.0 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: NFV SIG OpenvSwitch                             122 kB/s | 3.0 kB     00:00
Jan 22 13:23:44 compute-2 dnf[33988]: repo-setup-centos-appstream                     193 kB/s | 4.4 kB     00:00
Jan 22 13:23:45 compute-2 dnf[33988]: repo-setup-centos-baseos                        157 kB/s | 3.9 kB     00:00
Jan 22 13:23:45 compute-2 dnf[33988]: repo-setup-centos-highavailability              145 kB/s | 3.9 kB     00:00
Jan 22 13:23:45 compute-2 dnf[33988]: repo-setup-centos-powertools                    171 kB/s | 4.3 kB     00:00
Jan 22 13:23:45 compute-2 dnf[33988]: Extra Packages for Enterprise Linux 9 - x86_64  208 kB/s |  25 kB     00:00
Jan 22 13:23:45 compute-2 dnf[33988]: Metadata cache created.
Jan 22 13:23:46 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 13:23:46 compute-2 systemd[1]: Finished dnf makecache.
Jan 22 13:23:46 compute-2 systemd[1]: dnf-makecache.service: Consumed 1.983s CPU time.
Jan 22 13:24:37 compute-2 sshd-session[34309]: Invalid user sol from 45.148.10.240 port 41336
Jan 22 13:24:37 compute-2 sshd-session[34309]: Connection closed by invalid user sol 45.148.10.240 port 41336 [preauth]
Jan 22 13:24:55 compute-2 kernel: SELinux:  Converting 2723 SID table entries...
Jan 22 13:24:55 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:24:55 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:24:55 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:24:55 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:24:55 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:24:55 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:24:55 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:24:56 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 22 13:24:56 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:24:56 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:24:56 compute-2 systemd[1]: Reloading.
Jan 22 13:24:56 compute-2 systemd-rc-local-generator[34456]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:24:56 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:24:56 compute-2 sudo[33773]: pam_unix(sudo:session): session closed for user root
Jan 22 13:24:57 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:24:57 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:24:57 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1.069s CPU time.
Jan 22 13:24:57 compute-2 systemd[1]: run-re6a8c645af0a4cf0be66481f23587e9d.service: Deactivated successfully.
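The dnf task in this session installs the base EDPM tooling in a single transaction; the rpm -V run that follows verifies the installed files. An equivalent one-shot sketch, with the package list copied from the invocation above:

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos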
Jan 22 13:24:57 compute-2 sudo[35369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwoxjnxctozxglivvsyctwmpdkuepssw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088297.2937567-457-78945294396291/AnsiballZ_command.py'
Jan 22 13:24:57 compute-2 sudo[35369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:24:57 compute-2 python3.9[35371]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:24:58 compute-2 sudo[35369]: pam_unix(sudo:session): session closed for user root
Jan 22 13:24:59 compute-2 sudo[35650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyliqsaamdnoygtlixpkqsafoahtjokt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088298.979046-479-23115287921017/AnsiballZ_selinux.py'
Jan 22 13:24:59 compute-2 sudo[35650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:24:59 compute-2 python3.9[35652]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 13:24:59 compute-2 sudo[35650]: pam_unix(sudo:session): session closed for user root
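ansible.posix.selinux with policy=targeted and state=enforcing boils down to setting the runtime mode and persisting it in the configfile it was pointed at; a minimal sketch:

    setenforce 1                                                  # runtime mode: Enforcing
    sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config # persist across reboots
    getenforce                                                    # verify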
Jan 22 13:25:00 compute-2 sudo[35802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lauoeryaaohgqxkakxvxifvcmoezwsso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088300.4557338-513-62570864616704/AnsiballZ_command.py'
Jan 22 13:25:00 compute-2 sudo[35802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:00 compute-2 python3.9[35804]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 13:25:01 compute-2 sudo[35802]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:03 compute-2 sudo[35955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebidqisqjvddzbifazlmzntctprmhywe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088303.5429711-536-215950997086261/AnsiballZ_file.py'
Jan 22 13:25:03 compute-2 sudo[35955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:05 compute-2 python3.9[35957]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:25:05 compute-2 sudo[35955]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:06 compute-2 sudo[36107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdbdzgefusgseqgpumoziejyyvieiwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088305.9497848-561-157024446266492/AnsiballZ_mount.py'
Jan 22 13:25:06 compute-2 sudo[36107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:10 compute-2 python3.9[36109]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 13:25:10 compute-2 sudo[36107]: pam_unix(sudo:session): session closed for user root
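ansible.posix.mount with state=present only edits /etc/fstab; it does not activate the swap (that happens with the mkswap/swapon commands later in this session). With the parameters logged above, the resulting fstab entry would look like:

    /swap none swap sw 0 0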
Jan 22 13:25:11 compute-2 sudo[36259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkxrmctwqedujcwepvyiqqbgxuxybhom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088311.1984954-645-22598170946700/AnsiballZ_file.py'
Jan 22 13:25:11 compute-2 sudo[36259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:11 compute-2 python3.9[36261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:11 compute-2 sudo[36259]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:12 compute-2 sudo[36411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnazhtdpdiobxmvjbfpjyajcfaprwjez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088311.9978292-669-254647316206721/AnsiballZ_stat.py'
Jan 22 13:25:12 compute-2 sudo[36411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:12 compute-2 python3.9[36413]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:25:12 compute-2 sudo[36411]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:13 compute-2 sudo[36534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrygmdrehektkprobvrfuewjlczhyvqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088311.9978292-669-254647316206721/AnsiballZ_copy.py'
Jan 22 13:25:13 compute-2 sudo[36534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:13 compute-2 python3.9[36536]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088311.9978292-669-254647316206721/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:25:13 compute-2 sudo[36534]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:14 compute-2 sudo[36686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zugsfiqgqsuouqbqgwstjiaynbjqsduu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088314.315853-741-205176094460042/AnsiballZ_stat.py'
Jan 22 13:25:14 compute-2 sudo[36686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:14 compute-2 python3.9[36688]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:25:14 compute-2 sudo[36686]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:15 compute-2 sudo[36838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcfvdvibnlzqvdihtsczhkhvuopyatcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088315.1107237-764-120113442195326/AnsiballZ_command.py'
Jan 22 13:25:15 compute-2 sudo[36838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:15 compute-2 python3.9[36840]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:15 compute-2 sudo[36838]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:16 compute-2 sudo[36991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqlhoxlhroylpcoljokmajuoloqlbngh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088315.8567486-788-80792238163454/AnsiballZ_file.py'
Jan 22 13:25:16 compute-2 sudo[36991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:16 compute-2 python3.9[36993]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:25:16 compute-2 sudo[36991]: pam_unix(sudo:session): session closed for user root
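The two steps above populate the LVM devices file: vgimportdevices --all imports any visible volume groups into /etc/lvm/devices/system.devices, and the touch ensures the file exists even when nothing was imported (with the devices file enabled, an empty file restricts LVM to scanning no devices at all). A sketch:

    /usr/sbin/vgimportdevices --all
    test -e /etc/lvm/devices/system.devices || \
        install -m 0600 /dev/null /etc/lvm/devices/system.devices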
Jan 22 13:25:17 compute-2 sudo[37143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blfudxuquzihkflppglmeluxiuacbsjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088316.9946446-822-103271672456357/AnsiballZ_getent.py'
Jan 22 13:25:17 compute-2 sudo[37143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:17 compute-2 python3.9[37145]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 13:25:17 compute-2 sudo[37143]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:17 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:25:18 compute-2 sudo[37297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ancbbymvqdpjowhtakmjvzzqcicgsiyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088318.0099487-846-192223349590128/AnsiballZ_group.py'
Jan 22 13:25:18 compute-2 sudo[37297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:18 compute-2 python3.9[37299]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:25:18 compute-2 groupadd[37300]: group added to /etc/group: name=qemu, GID=107
Jan 22 13:25:18 compute-2 groupadd[37300]: group added to /etc/gshadow: name=qemu
Jan 22 13:25:18 compute-2 groupadd[37300]: new group: name=qemu, GID=107
Jan 22 13:25:18 compute-2 sudo[37297]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:19 compute-2 sudo[37455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siabgwwxxomtkwgwbjlmtkjukrbvpiwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088319.274962-869-72666282309291/AnsiballZ_user.py'
Jan 22 13:25:19 compute-2 sudo[37455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:20 compute-2 python3.9[37457]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:25:20 compute-2 useradd[37459]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 22 13:25:20 compute-2 sudo[37455]: pam_unix(sudo:session): session closed for user root
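The getent/group/user sequence above pins qemu to fixed ids (UID/GID 107), presumably so that files created before the virtualization packages land get stable ownership; the hugetlbfs group that follows is handled the same way with GID 42477. An equivalent sketch:

    getent group qemu  || groupadd -g 107 qemu
    getent passwd qemu || useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu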
Jan 22 13:25:20 compute-2 sudo[37615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daebdldgdfdxghmejlnivjyprzqksfsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088320.5413249-894-242150893952034/AnsiballZ_getent.py'
Jan 22 13:25:20 compute-2 sudo[37615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:21 compute-2 python3.9[37617]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 13:25:21 compute-2 sudo[37615]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:21 compute-2 sudo[37768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdrjwsssqohorenjzueseptagxkccwbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088321.3103008-918-114880315256411/AnsiballZ_group.py'
Jan 22 13:25:21 compute-2 sudo[37768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:21 compute-2 python3.9[37770]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:25:21 compute-2 groupadd[37771]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 22 13:25:21 compute-2 groupadd[37771]: group added to /etc/gshadow: name=hugetlbfs
Jan 22 13:25:21 compute-2 groupadd[37771]: new group: name=hugetlbfs, GID=42477
Jan 22 13:25:21 compute-2 sudo[37768]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:22 compute-2 sudo[37926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qierctcvhlxpwuhavxadlpozayhzzozc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088322.1988068-945-101727302004790/AnsiballZ_file.py'
Jan 22 13:25:22 compute-2 sudo[37926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:22 compute-2 python3.9[37928]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 13:25:22 compute-2 sudo[37926]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:23 compute-2 sudo[38078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fienxdmsedhoebmjtmshttkpebxoayvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088323.2564118-978-252073855615560/AnsiballZ_dnf.py'
Jan 22 13:25:23 compute-2 sudo[38078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:23 compute-2 python3.9[38080]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:25:25 compute-2 sudo[38078]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:29 compute-2 sudo[38232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijtmllnelglkscwamzyxkycqymjittzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088329.0016801-1002-22282620340011/AnsiballZ_file.py'
Jan 22 13:25:29 compute-2 sudo[38232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:29 compute-2 python3.9[38234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:29 compute-2 sudo[38232]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:29 compute-2 sudo[38384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttbynftnhasxaiderkbofqumpluuqkvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088329.702185-1026-74188786084060/AnsiballZ_stat.py'
Jan 22 13:25:29 compute-2 sudo[38384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:30 compute-2 python3.9[38386]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:25:30 compute-2 sudo[38384]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:30 compute-2 sudo[38507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwlprykexntlmkkrptjomnribgbqzbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088329.702185-1026-74188786084060/AnsiballZ_copy.py'
Jan 22 13:25:30 compute-2 sudo[38507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:30 compute-2 python3.9[38509]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088329.702185-1026-74188786084060/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:30 compute-2 sudo[38507]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:31 compute-2 sudo[38659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwqcdeuuvcbrmovdrckinngoepeyoab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088331.0594018-1071-192700522019031/AnsiballZ_systemd.py'
Jan 22 13:25:31 compute-2 sudo[38659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:32 compute-2 python3.9[38661]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:25:32 compute-2 systemd[1]: Starting Load Kernel Modules...
Jan 22 13:25:32 compute-2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 13:25:32 compute-2 kernel: Bridge firewalling registered
Jan 22 13:25:32 compute-2 systemd-modules-load[38665]: Inserted module 'br_netfilter'
Jan 22 13:25:32 compute-2 systemd[1]: Finished Load Kernel Modules.
Jan 22 13:25:32 compute-2 sudo[38659]: pam_unix(sudo:session): session closed for user root
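Restarting systemd-modules-load.service picks up the freshly written /etc/modules-load.d/99-edpm.conf; br_netfilter is the only module the log shows being inserted, so the sketch below assumes a minimal file (the real file may list more modules):

    printf 'br_netfilter\n' > /etc/modules-load.d/99-edpm.conf   # assumed minimal content
    systemctl restart systemd-modules-load.service
    lsmod | grep '^br_netfilter'                                 # confirm the module is loaded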
Jan 22 13:25:32 compute-2 sudo[38819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trptmjohxhgewfazxvmzkkmbtfyzbmat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088332.5190475-1095-28453236044137/AnsiballZ_stat.py'
Jan 22 13:25:32 compute-2 sudo[38819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:33 compute-2 python3.9[38821]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:25:33 compute-2 sudo[38819]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:33 compute-2 sudo[38942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdkjjojnutdoguuooicvmgdvueyzrcsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088332.5190475-1095-28453236044137/AnsiballZ_copy.py'
Jan 22 13:25:33 compute-2 sudo[38942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:33 compute-2 python3.9[38944]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088332.5190475-1095-28453236044137/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:25:33 compute-2 sudo[38942]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:34 compute-2 sudo[39094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmnyplqnxjnvoblmwyjokzfusfvvgnpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088334.326966-1148-129053713006967/AnsiballZ_dnf.py'
Jan 22 13:25:34 compute-2 sudo[39094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:34 compute-2 python3.9[39096]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:25:37 compute-2 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 13:25:38 compute-2 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 13:25:38 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:25:38 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:25:38 compute-2 systemd[1]: Reloading.
Jan 22 13:25:38 compute-2 systemd-rc-local-generator[39158]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:25:38 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:25:39 compute-2 sudo[39094]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:40 compute-2 python3.9[41435]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:25:41 compute-2 python3.9[42468]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 13:25:42 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:25:42 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:25:42 compute-2 systemd[1]: man-db-cache-update.service: Consumed 4.921s CPU time.
Jan 22 13:25:42 compute-2 systemd[1]: run-r5db5ed034bd64228832cc77fe1b394c9.service: Deactivated successfully.
Jan 22 13:25:42 compute-2 python3.9[43110]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:25:43 compute-2 sudo[43262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppcwvhrnfdkwtblhgdofhztjuuovrhwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088342.9294477-1266-72455866965911/AnsiballZ_command.py'
Jan 22 13:25:43 compute-2 sudo[43262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:43 compute-2 python3.9[43264]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:43 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 13:25:43 compute-2 systemd[1]: Starting Authorization Manager...
Jan 22 13:25:43 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 13:25:43 compute-2 polkitd[43481]: Started polkitd version 0.117
Jan 22 13:25:43 compute-2 polkitd[43481]: Loading rules from directory /etc/polkit-1/rules.d
Jan 22 13:25:43 compute-2 polkitd[43481]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 22 13:25:43 compute-2 polkitd[43481]: Finished loading, compiling and executing 2 rules
Jan 22 13:25:43 compute-2 systemd[1]: Started Authorization Manager.
Jan 22 13:25:43 compute-2 polkitd[43481]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 22 13:25:43 compute-2 sudo[43262]: pam_unix(sudo:session): session closed for user root
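tuned-adm profile switches the active profile and records it in /etc/tuned/active_profile, the file stat'ed and slurped a few tasks earlier; to apply and verify by hand:

    tuned-adm profile throughput-performance
    tuned-adm active                 # Current active profile: throughput-performance
    cat /etc/tuned/active_profile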
Jan 22 13:25:44 compute-2 sudo[43649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrvabruzyywztfqjosjvoxawsrhbgctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088344.5837042-1293-210848738231503/AnsiballZ_systemd.py'
Jan 22 13:25:44 compute-2 sudo[43649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:45 compute-2 python3.9[43651]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:25:45 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 13:25:45 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 13:25:45 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 13:25:45 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 13:25:45 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 13:25:45 compute-2 sudo[43649]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:46 compute-2 python3.9[43812]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 13:25:50 compute-2 sudo[43962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmdzqanyawzgikclzrakczhritnoycwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088349.8600876-1464-249033481673798/AnsiballZ_systemd.py'
Jan 22 13:25:50 compute-2 sudo[43962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:50 compute-2 python3.9[43964]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:25:50 compute-2 systemd[1]: Reloading.
Jan 22 13:25:50 compute-2 systemd-rc-local-generator[43995]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:25:50 compute-2 sudo[43962]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:51 compute-2 sudo[44152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjmhknbgykcesjtudocclyrxivhqquif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088351.0969348-1464-27633371058016/AnsiballZ_systemd.py'
Jan 22 13:25:51 compute-2 sudo[44152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:51 compute-2 python3.9[44154]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:25:51 compute-2 systemd[1]: Reloading.
Jan 22 13:25:51 compute-2 systemd-rc-local-generator[44181]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:25:52 compute-2 sudo[44152]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:52 compute-2 sudo[44341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfnvnvnuwpcflzwegtpsxgenhjtxeiur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088352.6238112-1512-194816742812581/AnsiballZ_command.py'
Jan 22 13:25:52 compute-2 sudo[44341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:53 compute-2 python3.9[44343]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:53 compute-2 sudo[44341]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:53 compute-2 sudo[44494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqqvmyahggnwqdsekpvbjpndmzwqtjcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088353.4252818-1536-185487299738039/AnsiballZ_command.py'
Jan 22 13:25:53 compute-2 sudo[44494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:53 compute-2 python3.9[44496]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:54 compute-2 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 22 13:25:54 compute-2 sudo[44494]: pam_unix(sudo:session): session closed for user root
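
The mkswap/swapon pair formats and activates a roughly 1 GiB swap file (the kernel line above confirms 1048572k). This excerpt does not show how /swap was allocated, so only the two logged steps are sketched:

- name: Format the pre-allocated swap file
  ansible.builtin.command: mkswap /swap

- name: Activate the swap file
  ansible.builtin.command: swapon /swap
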
Jan 22 13:25:54 compute-2 sudo[44647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xumrprutmycwyxjcvryufmxaurnowvvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088354.238514-1560-94911510262426/AnsiballZ_command.py'
Jan 22 13:25:54 compute-2 sudo[44647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:54 compute-2 python3.9[44649]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:56 compute-2 sudo[44647]: pam_unix(sudo:session): session closed for user root
Jan 22 13:25:56 compute-2 sudo[44809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbwpzdgkvtorsmdmhwhpvextgsumfbrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088356.5396452-1584-232893457309603/AnsiballZ_command.py'
Jan 22 13:25:56 compute-2 sudo[44809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:57 compute-2 python3.9[44811]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:25:57 compute-2 sudo[44809]: pam_unix(sudo:session): session closed for user root
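
Writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges every page it had already merged, complementing the service shutdown earlier. Note the task was logged with _uses_shell=False, i.e. the command module, which passes ">" through as a literal argument rather than performing a redirect; a form that actually performs the sysfs write would go through the shell module (my substitution, not what the log records):

- name: Stop KSM and unmerge all shared pages
  ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run
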
Jan 22 13:25:57 compute-2 sudo[44962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dohmvwdnxhaptycyjevnhxrokmkiaupt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088357.3504593-1608-218835928821434/AnsiballZ_systemd.py'
Jan 22 13:25:57 compute-2 sudo[44962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:25:57 compute-2 python3.9[44964]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:25:58 compute-2 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 13:25:58 compute-2 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 13:25:58 compute-2 systemd[1]: Stopping Apply Kernel Variables...
Jan 22 13:25:58 compute-2 systemd[1]: Starting Apply Kernel Variables...
Jan 22 13:25:58 compute-2 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 13:25:58 compute-2 systemd[1]: Finished Apply Kernel Variables.
Jan 22 13:25:58 compute-2 sudo[44962]: pam_unix(sudo:session): session closed for user root
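
Restarting systemd-sysctl.service (the "Apply Kernel Variables" unit above) re-reads /etc/sysctl.conf and /etc/sysctl.d and applies any values dropped in by earlier tasks. The equivalent task, reconstructed from the logged parameters:

- name: Re-apply kernel sysctl settings
  ansible.builtin.systemd:
    name: systemd-sysctl.service
    state: restarted
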
Jan 22 13:25:58 compute-2 sshd-session[31309]: Connection closed by 192.168.122.30 port 37138
Jan 22 13:25:58 compute-2 sshd-session[31306]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:25:58 compute-2 systemd[1]: session-10.scope: Deactivated successfully.
Jan 22 13:25:58 compute-2 systemd[1]: session-10.scope: Consumed 2min 16.191s CPU time.
Jan 22 13:25:58 compute-2 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Jan 22 13:25:58 compute-2 systemd-logind[787]: Removed session 10.
Jan 22 13:26:03 compute-2 sshd-session[44994]: Accepted publickey for zuul from 192.168.122.30 port 48262 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:26:03 compute-2 systemd-logind[787]: New session 11 of user zuul.
Jan 22 13:26:03 compute-2 systemd[1]: Started Session 11 of User zuul.
Jan 22 13:26:03 compute-2 sshd-session[44994]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:26:04 compute-2 python3.9[45147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:26:05 compute-2 sudo[45301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caoxvepfdiyvrsjuoimbwnfkmxujzjuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088365.5298636-70-269684368881205/AnsiballZ_getent.py'
Jan 22 13:26:05 compute-2 sudo[45301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:06 compute-2 python3.9[45303]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 13:26:06 compute-2 sudo[45301]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:06 compute-2 sudo[45454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxheaymwgdqixlleapnaynvutqufyejl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088366.44097-94-274324225134432/AnsiballZ_group.py'
Jan 22 13:26:06 compute-2 sudo[45454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:07 compute-2 python3.9[45456]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:26:07 compute-2 groupadd[45457]: group added to /etc/group: name=openvswitch, GID=42476
Jan 22 13:26:07 compute-2 groupadd[45457]: group added to /etc/gshadow: name=openvswitch
Jan 22 13:26:07 compute-2 groupadd[45457]: new group: name=openvswitch, GID=42476
Jan 22 13:26:07 compute-2 sudo[45454]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:07 compute-2 sudo[45612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkaiecxhrhnmltrgooxqmtxadnhecdwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088367.4483902-118-46959727810137/AnsiballZ_user.py'
Jan 22 13:26:07 compute-2 sudo[45612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:08 compute-2 python3.9[45614]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:26:08 compute-2 useradd[45616]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 22 13:26:08 compute-2 useradd[45616]: add 'openvswitch' to group 'hugetlbfs'
Jan 22 13:26:08 compute-2 useradd[45616]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 22 13:26:08 compute-2 sudo[45612]: pam_unix(sudo:session): session closed for user root
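
The group and user modules pin openvswitch to a fixed UID/GID of 42476 and add the account to hugetlbfs, presumably so file ownership stays stable across hosts and containerized services. Reconstructed from the logged arguments:

- name: Create the openvswitch group with a fixed GID
  ansible.builtin.group:
    name: openvswitch
    gid: 42476

- name: Create the matching openvswitch user
  ansible.builtin.user:
    name: openvswitch
    uid: 42476
    group: openvswitch
    groups: [hugetlbfs]
    shell: /sbin/nologin
    comment: openvswitch user
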
Jan 22 13:26:08 compute-2 sudo[45772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htejbwlgfkbrmycmcwlmqsvxtdhrqrwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088368.6689672-148-183003197773985/AnsiballZ_setup.py'
Jan 22 13:26:08 compute-2 sudo[45772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:09 compute-2 python3.9[45774]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:26:09 compute-2 sudo[45772]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:09 compute-2 sudo[45856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzgoahlcjuvgfjhbczeeqprkyhegijlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088368.6689672-148-183003197773985/AnsiballZ_dnf.py'
Jan 22 13:26:09 compute-2 sudo[45856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:10 compute-2 python3.9[45858]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:26:12 compute-2 sudo[45856]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:14 compute-2 sudo[46020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgkcvkzvyyzaqhpymfrknpyeemnrxsyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088374.074164-190-99795244458875/AnsiballZ_dnf.py'
Jan 22 13:26:14 compute-2 sudo[46020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:14 compute-2 python3.9[46022]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:26:26 compute-2 kernel: SELinux:  Converting 2736 SID table entries...
Jan 22 13:26:26 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:26:26 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:26:26 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:26:26 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:26:26 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:26:26 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:26:26 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:26:26 compute-2 groupadd[46045]: group added to /etc/group: name=unbound, GID=994
Jan 22 13:26:26 compute-2 groupadd[46045]: group added to /etc/gshadow: name=unbound
Jan 22 13:26:26 compute-2 groupadd[46045]: new group: name=unbound, GID=994
Jan 22 13:26:26 compute-2 useradd[46052]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 22 13:26:26 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 22 13:26:26 compute-2 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 22 13:26:28 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:26:28 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:26:28 compute-2 systemd[1]: Reloading.
Jan 22 13:26:28 compute-2 systemd-rc-local-generator[46552]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:26:28 compute-2 systemd-sysv-generator[46555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:26:28 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:26:29 compute-2 sudo[46020]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:29 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:26:29 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:26:29 compute-2 systemd[1]: run-r663dc6f62e7b4476a1bec8fc650f28b6.service: Deactivated successfully.
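
openvswitch is fetched with download_only=True first and installed in a second transaction, keeping the failure-prone network phase separate from the actual install; the install itself triggers the SELinux policy rebuild, the unbound user creation, and the man-db cache update seen above. The two logged dnf calls reduce to:

- name: Pre-download the openvswitch package
  ansible.builtin.dnf:
    name: openvswitch
    download_only: true

- name: Install openvswitch
  ansible.builtin.dnf:
    name: openvswitch
    state: present
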
Jan 22 13:26:33 compute-2 sudo[47119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzspmomvgrwjxeqfrmcixvkceaiqlzud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088392.699483-214-158151224871078/AnsiballZ_systemd.py'
Jan 22 13:26:33 compute-2 sudo[47119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:33 compute-2 python3.9[47121]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:26:33 compute-2 systemd[1]: Reloading.
Jan 22 13:26:33 compute-2 systemd-rc-local-generator[47146]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:26:33 compute-2 systemd-sysv-generator[47149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:26:34 compute-2 systemd[1]: Starting Open vSwitch Database Unit...
Jan 22 13:26:34 compute-2 chown[47162]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 22 13:26:34 compute-2 ovs-ctl[47167]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 22 13:26:34 compute-2 ovs-ctl[47167]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 22 13:26:34 compute-2 ovs-ctl[47167]: Starting ovsdb-server [  OK  ]
Jan 22 13:26:34 compute-2 ovs-vsctl[47216]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 22 13:26:34 compute-2 ovs-vsctl[47232]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c4fa18b6-ed0f-47ac-8eec-d1399749aa8e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 22 13:26:34 compute-2 ovs-ctl[47167]: Configuring Open vSwitch system IDs [  OK  ]
Jan 22 13:26:34 compute-2 ovs-ctl[47167]: Enabling remote OVSDB managers [  OK  ]
Jan 22 13:26:34 compute-2 systemd[1]: Started Open vSwitch Database Unit.
Jan 22 13:26:34 compute-2 ovs-vsctl[47241]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Jan 22 13:26:34 compute-2 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 22 13:26:34 compute-2 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 22 13:26:34 compute-2 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 22 13:26:34 compute-2 kernel: openvswitch: Open vSwitch switching datapath
Jan 22 13:26:34 compute-2 ovs-ctl[47285]: Inserting openvswitch module [  OK  ]
Jan 22 13:26:34 compute-2 ovs-ctl[47254]: Starting ovs-vswitchd [  OK  ]
Jan 22 13:26:34 compute-2 ovs-vsctl[47303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Jan 22 13:26:34 compute-2 ovs-ctl[47254]: Enabling remote OVSDB managers [  OK  ]
Jan 22 13:26:34 compute-2 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 22 13:26:34 compute-2 systemd[1]: Starting Open vSwitch...
Jan 22 13:26:34 compute-2 systemd[1]: Finished Open vSwitch.
Jan 22 13:26:34 compute-2 sudo[47119]: pam_unix(sudo:session): session closed for user root
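
Starting openvswitch.service for the first time makes ovs-ctl create an empty /etc/openvswitch/conf.db, start ovsdb-server and ovs-vswitchd, and record the system-id and ovs-version in the Open_vSwitch table, exactly as the ovs-ctl/ovs-vsctl lines above show. The logged systemd task:

- name: Enable and start Open vSwitch
  ansible.builtin.systemd:
    name: openvswitch.service
    state: started
    enabled: true
    masked: false
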
Jan 22 13:26:36 compute-2 python3.9[47454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:26:37 compute-2 sudo[47604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckamxshxxqidupowtxumlybcziebjohm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088397.1145515-269-15439841627887/AnsiballZ_sefcontext.py'
Jan 22 13:26:37 compute-2 sudo[47604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:38 compute-2 python3.9[47606]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 13:26:39 compute-2 kernel: SELinux:  Converting 2750 SID table entries...
Jan 22 13:26:39 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:26:39 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:26:39 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:26:39 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:26:39 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:26:39 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:26:39 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:26:39 compute-2 sudo[47604]: pam_unix(sudo:session): session closed for user root
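
The sefcontext call adds a persistent file-context rule so everything under /var/lib/edpm-config is labeled container_file_t and is therefore writable by containerized services; reload=True is what produces the SELinux "Converting ... SID table entries" burst logged right before it. Reconstructed:

- name: Label /var/lib/edpm-config for container access
  community.general.sefcontext:
    target: '/var/lib/edpm-config(/.*)?'
    setype: container_file_t
    selevel: s0
    state: present
    reload: true
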
Jan 22 13:26:40 compute-2 python3.9[47762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:26:41 compute-2 sudo[47918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yptcbiubeejtkbmfibukvgcexelvqmiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088401.1958141-322-143037470416988/AnsiballZ_dnf.py'
Jan 22 13:26:41 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 22 13:26:41 compute-2 sudo[47918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:41 compute-2 python3.9[47920]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:26:42 compute-2 sudo[47918]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:44 compute-2 sudo[48071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plqxlpmsuaunqecgsobmnenlzutrdfjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088403.588122-347-232500593979364/AnsiballZ_command.py'
Jan 22 13:26:44 compute-2 sudo[48071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:44 compute-2 python3.9[48073]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:26:44 compute-2 sudo[48071]: pam_unix(sudo:session): session closed for user root
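
After the bulk dnf install, rpm -V checks each package's files against the rpm database, a cheap integrity check that the transaction left nothing half-written. A sketch with the package list abbreviated; the register name and failure handling are illustrative:

- name: Verify installed packages against rpm metadata
  ansible.builtin.command:
    argv: [rpm, -V, driverctl, lvm2, crudini, jq, nftables, NetworkManager]  # list abbreviated
  register: rpm_verify             # illustrative name
  changed_when: false              # verification never changes state
  failed_when: rpm_verify.rc != 0  # rpm -V exits non-zero if any file deviates
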
Jan 22 13:26:45 compute-2 sudo[48358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqqljtegxmasswbsbsewgyirpwhyzwqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088405.279522-371-43592884688875/AnsiballZ_file.py'
Jan 22 13:26:45 compute-2 sudo[48358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:46 compute-2 python3.9[48360]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 13:26:46 compute-2 sudo[48358]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:46 compute-2 python3.9[48510]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:26:47 compute-2 sudo[48662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtpjhhztvlpxblpkhcbrgnjpibsluker ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088407.1730826-418-199598522458595/AnsiballZ_dnf.py'
Jan 22 13:26:47 compute-2 sudo[48662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:47 compute-2 python3.9[48664]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:26:50 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:26:50 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:26:50 compute-2 systemd[1]: Reloading.
Jan 22 13:26:50 compute-2 systemd-rc-local-generator[48704]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:26:50 compute-2 systemd-sysv-generator[48707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:26:50 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:26:50 compute-2 sshd-session[48671]: Invalid user sol from 45.148.10.240 port 54908
Jan 22 13:26:50 compute-2 sshd-session[48671]: Connection closed by invalid user sol 45.148.10.240 port 54908 [preauth]
Jan 22 13:26:51 compute-2 sudo[48662]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:51 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:26:51 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:26:51 compute-2 systemd[1]: run-rf9eb6405d7ff4db9af28804d8ddafea6.service: Deactivated successfully.
Jan 22 13:26:52 compute-2 sudo[48980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcxsufccxrsjpzrhjyiyvaqtbezhopst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088411.9894073-442-112805082800425/AnsiballZ_systemd.py'
Jan 22 13:26:52 compute-2 sudo[48980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:52 compute-2 python3.9[48982]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:26:52 compute-2 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 13:26:52 compute-2 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 13:26:52 compute-2 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 13:26:52 compute-2 systemd[1]: Stopping Network Manager...
Jan 22 13:26:52 compute-2 NetworkManager[7199]: <info>  [1769088412.6801] caught SIGTERM, shutting down normally.
Jan 22 13:26:52 compute-2 NetworkManager[7199]: <info>  [1769088412.6826] dhcp4 (eth0): canceled DHCP transaction
Jan 22 13:26:52 compute-2 NetworkManager[7199]: <info>  [1769088412.6827] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 13:26:52 compute-2 NetworkManager[7199]: <info>  [1769088412.6827] dhcp4 (eth0): state changed no lease
Jan 22 13:26:52 compute-2 NetworkManager[7199]: <info>  [1769088412.6834] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 13:26:52 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 13:26:52 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 13:26:52 compute-2 NetworkManager[7199]: <info>  [1769088412.8911] exiting (success)
Jan 22 13:26:52 compute-2 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 13:26:52 compute-2 systemd[1]: Stopped Network Manager.
Jan 22 13:26:52 compute-2 systemd[1]: NetworkManager.service: Consumed 12.789s CPU time, 4.1M memory peak, read 0B from disk, written 41.5K to disk.
Jan 22 13:26:52 compute-2 systemd[1]: Starting Network Manager...
Jan 22 13:26:52 compute-2 NetworkManager[49000]: <info>  [1769088412.9635] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:24f4eb82-7451-47a9-a2ab-85f318c16b8a)
Jan 22 13:26:52 compute-2 NetworkManager[49000]: <info>  [1769088412.9636] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 13:26:52 compute-2 NetworkManager[49000]: <info>  [1769088412.9697] manager[0x55e326179000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 13:26:52 compute-2 systemd[1]: Starting Hostname Service...
Jan 22 13:26:53 compute-2 systemd[1]: Started Hostname Service.
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0600] hostname: hostname: using hostnamed
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0600] hostname: static hostname changed from (none) to "compute-2"
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0606] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0611] manager[0x55e326179000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0612] manager[0x55e326179000]: rfkill: WWAN hardware radio set enabled
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0633] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0642] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0643] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0644] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0645] manager: Networking is enabled by state file
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0647] settings: Loaded settings plugin: keyfile (internal)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0651] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0681] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0694] dhcp: init: Using DHCP client 'internal'
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0696] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0700] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0705] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0716] device (lo): Activation: starting connection 'lo' (4169075c-72f8-4434-940a-1a390ca696d3)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0722] device (eth0): carrier: link connected
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0726] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0732] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0734] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0741] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0746] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0752] device (eth1): carrier: link connected
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0756] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0762] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba) (indicated)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0762] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0769] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0777] device (eth1): Activation: starting connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0784] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 13:26:53 compute-2 systemd[1]: Started Network Manager.
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0794] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0797] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0799] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0803] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0806] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0809] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0813] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0818] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0826] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0830] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0839] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0851] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0861] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0866] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0872] device (lo): Activation: successful, device activated.
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0879] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0882] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0885] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0888] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0890] device (eth1): Activation: successful, device activated.
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.0901] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 13:26:53 compute-2 systemd[1]: Starting Network Manager Wait Online...
Jan 22 13:26:53 compute-2 sudo[48980]: pam_unix(sudo:session): session closed for user root
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.1932] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.2004] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.2005] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.2008] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.2010] device (eth0): Activation: successful, device activated.
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.2016] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 13:26:53 compute-2 NetworkManager[49000]: <info>  [1769088413.2356] manager: startup complete
Jan 22 13:26:53 compute-2 systemd[1]: Finished Network Manager Wait Online.
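
NetworkManager is restarted so it picks up the freshly installed OVS plugin, and the startup log confirms it: libnm-device-plugin-ovs.so appears in the "Loaded device plugin" lines, both NICs are re-assumed without their IP configuration being torn down, and eth0 renews its DHCP lease. The logged task is simply:

- name: Restart NetworkManager to load the OVS plugin
  ansible.builtin.systemd:
    name: NetworkManager
    state: restarted
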
Jan 22 13:26:54 compute-2 sudo[49206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivfeyldgwwmiszqwddhlxzutkftxxltw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088413.8164835-466-163426407519023/AnsiballZ_dnf.py'
Jan 22 13:26:54 compute-2 sudo[49206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:26:54 compute-2 python3.9[49208]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:27:03 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 13:27:06 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:27:06 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:27:06 compute-2 systemd[1]: Reloading.
Jan 22 13:27:06 compute-2 systemd-rc-local-generator[49261]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:27:06 compute-2 systemd-sysv-generator[49265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:27:06 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:27:07 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:27:07 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:27:07 compute-2 systemd[1]: run-r22677baaabb740128278b5f46fbd6980.service: Deactivated successfully.
Jan 22 13:27:07 compute-2 sudo[49206]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:08 compute-2 sudo[49666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owqncrlpsazoceumlhcvusvdshstbndd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088428.3469975-503-227525878743678/AnsiballZ_stat.py'
Jan 22 13:27:08 compute-2 sudo[49666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:08 compute-2 python3.9[49668]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:27:08 compute-2 sudo[49666]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:09 compute-2 sudo[49818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehpxwdsgvjzbopzrklybxjemkcrbdfep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088429.1200776-529-102667820628859/AnsiballZ_ini_file.py'
Jan 22 13:27:09 compute-2 sudo[49818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:09 compute-2 python3.9[49820]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:09 compute-2 sudo[49818]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:10 compute-2 sudo[49972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgdclbaqmhjhexevgbpshfqgkiunaejj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088430.2013083-559-51971977960466/AnsiballZ_ini_file.py'
Jan 22 13:27:10 compute-2 sudo[49972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:10 compute-2 python3.9[49974]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:10 compute-2 sudo[49972]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:11 compute-2 sudo[50124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrquryagupjslxeljznqjwlfzejsqgge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088431.031776-559-66695286214822/AnsiballZ_ini_file.py'
Jan 22 13:27:11 compute-2 sudo[50124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:11 compute-2 python3.9[50126]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:11 compute-2 sudo[50124]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:12 compute-2 sudo[50276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwsuphpfqnjalxuyninashclrofhwwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088431.806323-604-237721419636226/AnsiballZ_ini_file.py'
Jan 22 13:27:12 compute-2 sudo[50276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:12 compute-2 python3.9[50278]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:12 compute-2 sudo[50276]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:12 compute-2 sudo[50428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwtxdyhtomifsdgafnbrrtdxguwmsvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088432.4632235-604-234368876324450/AnsiballZ_ini_file.py'
Jan 22 13:27:12 compute-2 sudo[50428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:12 compute-2 python3.9[50430]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:13 compute-2 sudo[50428]: pam_unix(sudo:session): session closed for user root
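
The five ini_file invocations between 13:27:09 and 13:27:13 prepare NetworkManager for os-net-config: no-auto-default=* stops NM from auto-activating NICs it has no profile for, and the dns=none and rc-manager=unmanaged overrides left by cloud-init are removed from both config files so NM manages DNS and resolv.conf again. Condensed into two tasks (the loop is my condensation of four logged calls):

- name: Do not auto-activate unconfigured interfaces
  community.general.ini_file:
    path: /etc/NetworkManager/NetworkManager.conf
    section: main
    option: no-auto-default
    value: '*'
    no_extra_spaces: true
    mode: '0644'
    backup: true

- name: Drop the cloud-init DNS and resolv.conf overrides
  community.general.ini_file:
    path: "{{ item.path }}"
    section: main
    option: "{{ item.option }}"
    value: "{{ item.value }}"
    state: absent
  loop:
    - { path: /etc/NetworkManager/NetworkManager.conf, option: dns, value: none }
    - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: dns, value: none }
    - { path: /etc/NetworkManager/NetworkManager.conf, option: rc-manager, value: unmanaged }
    - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: rc-manager, value: unmanaged }
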
Jan 22 13:27:13 compute-2 sudo[50580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loyuzwcsgtudtbwwtxtlnrnbtlmwimrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088433.2086165-649-175126026326719/AnsiballZ_stat.py'
Jan 22 13:27:13 compute-2 sudo[50580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:13 compute-2 python3.9[50582]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:27:13 compute-2 sudo[50580]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:14 compute-2 sudo[50703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyioepuksoyvdnlzgsthloabyihfpaqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088433.2086165-649-175126026326719/AnsiballZ_copy.py'
Jan 22 13:27:14 compute-2 sudo[50703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:14 compute-2 python3.9[50705]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088433.2086165-649-175126026326719/.source _original_basename=.9x2j16ri follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:14 compute-2 sudo[50703]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:15 compute-2 sudo[50855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkcoowywmewfyjxzfiygyvzimryzhcmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088434.753776-694-191679083985149/AnsiballZ_file.py'
Jan 22 13:27:15 compute-2 sudo[50855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:15 compute-2 python3.9[50857]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:15 compute-2 sudo[50855]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:16 compute-2 sudo[51007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaveychcjgzfbwkxjbdrqkcmnvxakzap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088435.843331-719-4913409647865/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 22 13:27:16 compute-2 sudo[51007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:16 compute-2 python3.9[51009]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 22 13:27:16 compute-2 sudo[51007]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:17 compute-2 sudo[51159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsqrgsuxaobxvtayptvygfodwlutcvcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088436.832287-745-170483452163674/AnsiballZ_file.py'
Jan 22 13:27:17 compute-2 sudo[51159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:17 compute-2 python3.9[51161]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:17 compute-2 sudo[51159]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:18 compute-2 sudo[51311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfgeyrwqrfkihncpugxqnclqkqqnfmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088437.7390623-775-48413625441952/AnsiballZ_stat.py'
Jan 22 13:27:18 compute-2 sudo[51311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:18 compute-2 sudo[51311]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:19 compute-2 sudo[51434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqrilhjacnjngjgxosfuviqxvossmyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088437.7390623-775-48413625441952/AnsiballZ_copy.py'
Jan 22 13:27:19 compute-2 sudo[51434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:19 compute-2 sudo[51434]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:19 compute-2 sudo[51586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzpadazwihywbfxbujnvthtjrcwmvbkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088439.4937575-820-253126726679462/AnsiballZ_slurp.py'
Jan 22 13:27:19 compute-2 sudo[51586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:20 compute-2 python3.9[51588]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 22 13:27:20 compute-2 sudo[51586]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:21 compute-2 sudo[51761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdkdddzhtiohdfkrzodlzrhltsfokper ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5646186-847-154640478254585/async_wrapper.py j398345004378 300 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5646186-847-154640478254585/AnsiballZ_edpm_os_net_config.py _'
Jan 22 13:27:21 compute-2 sudo[51761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:21 compute-2 ansible-async_wrapper.py[51763]: Invoked with j398345004378 300 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5646186-847-154640478254585/AnsiballZ_edpm_os_net_config.py _
Jan 22 13:27:21 compute-2 ansible-async_wrapper.py[51766]: Starting module and watcher
Jan 22 13:27:21 compute-2 ansible-async_wrapper.py[51766]: Start watching 51767 (300)
Jan 22 13:27:21 compute-2 ansible-async_wrapper.py[51767]: Start module (51767)
Jan 22 13:27:21 compute-2 ansible-async_wrapper.py[51763]: Return async_wrapper task started.
Jan 22 13:27:21 compute-2 sudo[51761]: pam_unix(sudo:session): session closed for user root
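
The edpm_os_net_config module is launched through Ansible's async_wrapper with a 300-second window, so the task survives even if applying the network configuration drops the SSH session mid-run. A sketch of the equivalent task, reconstructed from the logged arguments; the module name is shown without a collection prefix (none is visible in the log) and the poll interval is illustrative:

- name: Apply the network configuration (may interrupt connectivity)
  edpm_os_net_config:              # collection prefix not shown in the log
    config_file: /etc/os-net-config/config.yaml
    cleanup: true
    debug: true
    detailed_exit_codes: true
    safe_defaults: false
    use_nmstate: true
  async: 300                       # matches the 300 s window logged by async_wrapper
  poll: 3                          # illustrative; the log does not show the poll value
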
Jan 22 13:27:22 compute-2 python3.9[51768]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 22 13:27:22 compute-2 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 22 13:27:22 compute-2 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 22 13:27:22 compute-2 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 22 13:27:22 compute-2 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 22 13:27:22 compute-2 kernel: cfg80211: failed to load regulatory.db
Jan 22 13:27:23 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9308] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9331] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9908] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9910] audit: op="connection-add" uuid="794ece31-c950-47e0-b112-d35532234c80" name="br-ex-br" pid=51769 uid=0 result="success"
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9927] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9929] audit: op="connection-add" uuid="cfa747da-58e6-4689-922d-9de70c75d190" name="br-ex-port" pid=51769 uid=0 result="success"
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9944] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9945] audit: op="connection-add" uuid="01d1f839-e308-47fa-9552-b2bf782de783" name="eth1-port" pid=51769 uid=0 result="success"
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9959] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9961] audit: op="connection-add" uuid="653c7484-2f46-4a61-bebe-aeb46aee2b4d" name="vlan20-port" pid=51769 uid=0 result="success"
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9978] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9980] audit: op="connection-add" uuid="7c6775d8-492e-4c9b-b693-de0f747bcd4b" name="vlan21-port" pid=51769 uid=0 result="success"
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9992] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 22 13:27:23 compute-2 NetworkManager[49000]: <info>  [1769088443.9994] audit: op="connection-add" uuid="d4d9d3b7-2ffe-45f3-93ea-99b12d620658" name="vlan22-port" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.0007] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.0009] audit: op="connection-add" uuid="510a7eb7-fa56-416d-80a8-585e183c87cb" name="vlan23-port" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.0030] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.0046] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.0048] audit: op="connection-add" uuid="cc2bdf83-bde6-4891-9ac7-1a16d6d2c96a" name="br-ex-if" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1829] audit: op="connection-update" uuid="dcaea49a-a5c5-5229-9667-55a0529b8fba" name="ci-private-network" args="ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ovs-interface.type,connection.timestamp,connection.master,connection.slave-type,connection.controller,connection.port-type,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.method,ovs-external-ids.data" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1864] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1867] audit: op="connection-add" uuid="874673f3-da52-46f1-a439-0fc3d630c8a5" name="vlan20-if" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1897] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1899] audit: op="connection-add" uuid="20424f20-d962-437e-b725-715685dd4a3c" name="vlan21-if" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1928] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1931] audit: op="connection-add" uuid="74049661-d3e8-4640-8857-4d3b9096f66b" name="vlan22-if" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1962] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1965] audit: op="connection-add" uuid="03984fbf-a87a-4009-9d20-112f7b9dc3f6" name="vlan23-if" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.1986] audit: op="connection-delete" uuid="128e382a-734b-354e-b29c-4c5a72c08cb7" name="Wired connection 1" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2007] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2011] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2025] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2032] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (794ece31-c950-47e0-b112-d35532234c80)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2033] audit: op="connection-activate" uuid="794ece31-c950-47e0-b112-d35532234c80" name="br-ex-br" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2036] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2037] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2047] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2054] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (cfa747da-58e6-4689-922d-9de70c75d190)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2057] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2058] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2067] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2075] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (01d1f839-e308-47fa-9552-b2bf782de783)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2078] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2079] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2087] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2095] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (653c7484-2f46-4a61-bebe-aeb46aee2b4d)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2098] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2099] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2109] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2116] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7c6775d8-492e-4c9b-b693-de0f747bcd4b)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2118] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2121] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2130] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2139] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d4d9d3b7-2ffe-45f3-93ea-99b12d620658)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2142] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2145] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2155] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2164] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (510a7eb7-fa56-416d-80a8-585e183c87cb)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2166] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2170] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2173] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2184] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2185] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2189] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2196] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cc2bdf83-bde6-4891-9ac7-1a16d6d2c96a)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2197] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2204] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2208] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2210] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2213] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2231] device (eth1): disconnecting for new activation request.
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2232] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2238] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2242] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2244] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2248] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2250] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2255] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2262] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (874673f3-da52-46f1-a439-0fc3d630c8a5)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2263] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2268] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2271] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2273] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2278] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2280] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2284] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2291] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (20424f20-d962-437e-b725-715685dd4a3c)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2293] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2298] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2301] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2303] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2307] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2309] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2314] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2320] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (74049661-d3e8-4640-8857-4d3b9096f66b)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2322] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2327] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2330] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2332] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2337] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <warn>  [1769088444.2338] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2341] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2346] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (03984fbf-a87a-4009-9d20-112f7b9dc3f6)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2347] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2349] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2351] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2352] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2354] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2370] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2372] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2377] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2379] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2387] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2391] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2394] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2397] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2398] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2403] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2407] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2410] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2411] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2416] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2420] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2423] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2424] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2429] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2433] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2436] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2437] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2441] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2446] dhcp4 (eth0): canceled DHCP transaction
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2446] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2446] dhcp4 (eth0): state changed no lease
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2448] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2467] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51769 uid=0 result="fail" reason="Device is not activated"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2529] device (eth1): disconnecting for new activation request.
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2530] audit: op="connection-activate" uuid="dcaea49a-a5c5-5229-9667-55a0529b8fba" name="ci-private-network" pid=51769 uid=0 result="success"
Jan 22 13:27:24 compute-2 NetworkManager[49000]: <info>  [1769088444.2618] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 13:27:24 compute-2 kernel: ovs-system: entered promiscuous mode
Jan 22 13:27:24 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 13:27:24 compute-2 kernel: Timeout policy base is empty
Jan 22 13:27:24 compute-2 systemd-udevd[51775]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:27:24 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 13:27:24 compute-2 kernel: br-ex: entered promiscuous mode
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1775] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1797] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1808] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1813] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1814] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 22 13:27:25 compute-2 kernel: vlan20: entered promiscuous mode
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1953] device (eth1): Activation: starting connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1960] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1963] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1966] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1968] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1971] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1973] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1975] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1980] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.1995] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2002] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 kernel: vlan21: entered promiscuous mode
Jan 22 13:27:25 compute-2 systemd-udevd[51773]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2024] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2034] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2048] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2059] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2067] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2075] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2081] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2096] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2099] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2102] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2105] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2107] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2110] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2113] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2122] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 22 13:27:25 compute-2 kernel: vlan22: entered promiscuous mode
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2134] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2142] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2144] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2146] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2160] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 22 13:27:25 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2429] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2430] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2454] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2465] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2471] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2484] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 kernel: vlan23: entered promiscuous mode
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.2520] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4472] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4473] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4476] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4478] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4484] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4491] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4501] device (eth1): Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4508] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4516] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4526] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4540] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4586] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.4598] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 sudo[52091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vycmrxpgrfwdcpbdkgtlhqnzvcsypxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088445.0431128-847-94210666958249/AnsiballZ_async_status.py'
Jan 22 13:27:25 compute-2 sudo[52091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:25 compute-2 python3.9[52094]: ansible-ansible.legacy.async_status Invoked with jid=j398345004378.51763 mode=status _async_dir=/root/.ansible_async
Jan 22 13:27:25 compute-2 sudo[52091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.7518] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.7525] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.7533] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.7544] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.7555] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 13:27:25 compute-2 NetworkManager[49000]: <info>  [1769088445.7564] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
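
[Annotation] The br-ex-br / br-ex-port / br-ex-if triad activated above is NetworkManager's standard three-layer OVS topology (ovs-bridge, ovs-port, ovs-interface). A minimal nmcli sketch that would produce similar connection-add audit entries, assuming nmcli as the front end (connection and device names are taken from the log; the deployment tooling may instead drive NetworkManager's D-Bus API directly):

    # bridge first, then a port on it, then the internal interface on that port
    nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex-br
    nmcli conn add type ovs-port conn.interface br-ex master br-ex-br con-name br-ex-port
    nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex \
        master br-ex-port con-name br-ex-if

Each vlan2X-port / vlan2X-if pair in the log repeats the same port-plus-interface pattern on the same bridge.
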
Jan 22 13:27:26 compute-2 ansible-async_wrapper.py[51766]: 51767 still running (300)
Jan 22 13:27:26 compute-2 NetworkManager[49000]: <info>  [1769088446.9385] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.0795] checkpoint[0x55e32614f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.0797] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.4760] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.4774] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.8138] audit: op="networking-control" arg="global-dns-configuration" pid=51769 uid=0 result="success"
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.8215] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.8518] audit: op="networking-control" arg="global-dns-configuration" pid=51769 uid=0 result="success"
Jan 22 13:27:27 compute-2 NetworkManager[49000]: <info>  [1769088447.8539] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
Jan 22 13:27:28 compute-2 NetworkManager[49000]: <info>  [1769088448.0119] checkpoint[0x55e32614fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 22 13:27:28 compute-2 NetworkManager[49000]: <info>  [1769088448.0122] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
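
[Annotation] The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy cycle is NetworkManager's transactional safety net: changes are applied under a checkpoint, the rollback deadline is extended while work is in flight, and destroying the checkpoint commits the result; if it were never destroyed, the configuration would roll back automatically. Roughly equivalent manual calls, shown via busctl purely for illustration (the 120-second timeout is a placeholder, not a value from this log):

    # create a checkpoint over all devices with a 120 s automatic rollback
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 120 0
    # commit by destroying the checkpoint before the timeout fires
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointDestroy o /org/freedesktop/NetworkManager/Checkpoint/2
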
Jan 22 13:27:28 compute-2 ansible-async_wrapper.py[51767]: Module complete (51767)
Jan 22 13:27:29 compute-2 sudo[52232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgpjfidamejuydvowuktltrkwmxkkedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088445.0431128-847-94210666958249/AnsiballZ_async_status.py'
Jan 22 13:27:29 compute-2 sudo[52232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:29 compute-2 python3.9[52234]: ansible-ansible.legacy.async_status Invoked with jid=j398345004378.51763 mode=status _async_dir=/root/.ansible_async
Jan 22 13:27:29 compute-2 sudo[52232]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:29 compute-2 sudo[52332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibxyuptqyriqckbgdazorwxqylmghfzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088445.0431128-847-94210666958249/AnsiballZ_async_status.py'
Jan 22 13:27:29 compute-2 sudo[52332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:29 compute-2 python3.9[52334]: ansible-ansible.legacy.async_status Invoked with jid=j398345004378.51763 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 13:27:29 compute-2 sudo[52332]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:30 compute-2 sudo[52484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqjsytjbvszwvdvsgcmvdrdvkpspbzsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088450.1840255-928-103982853368098/AnsiballZ_stat.py'
Jan 22 13:27:30 compute-2 sudo[52484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:30 compute-2 python3.9[52486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:27:30 compute-2 sudo[52484]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:31 compute-2 sudo[52607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcjflrypjnvviiinbwofgicmylhrhili ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088450.1840255-928-103982853368098/AnsiballZ_copy.py'
Jan 22 13:27:31 compute-2 sudo[52607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:31 compute-2 python3.9[52609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088450.1840255-928-103982853368098/.source.returncode _original_basename=.u3cimw6s follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:31 compute-2 sudo[52607]: pam_unix(sudo:session): session closed for user root
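
[Annotation] The checksum recorded for os-net-config.returncode (b6589fc6ab0dc82cf12099d1c2d40ab994e8410c) is the SHA-1 of the literal string "0", i.e. os-net-config exited successfully. This can be verified with:

    printf '0' | sha1sum
    # b6589fc6ab0dc82cf12099d1c2d40ab994e8410c  -
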
Jan 22 13:27:31 compute-2 ansible-async_wrapper.py[51766]: Done in kid B.
Jan 22 13:27:32 compute-2 sudo[52759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmeybpivqiphqdymdykbznymqdwodobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088451.8767743-976-127544732575938/AnsiballZ_stat.py'
Jan 22 13:27:32 compute-2 sudo[52759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:32 compute-2 python3.9[52761]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:27:32 compute-2 sudo[52759]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:32 compute-2 sudo[52883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cotcupivakxvxbndqsuvvqbyozgojnyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088451.8767743-976-127544732575938/AnsiballZ_copy.py'
Jan 22 13:27:32 compute-2 sudo[52883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:33 compute-2 python3.9[52885]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088451.8767743-976-127544732575938/.source.cfg _original_basename=.nin0l4pl follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:33 compute-2 sudo[52883]: pam_unix(sudo:session): session closed for user root
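
[Annotation] The file deployed here is the usual cloud-init knob that stops cloud-init from rewriting the network configuration os-net-config just laid down. Its exact contents are not recoverable from the checksum alone, but the standard stanza for this purpose is:

    # /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg (typical contents)
    # network:
    #   config: disabled
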
Jan 22 13:27:33 compute-2 sudo[53035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwfxcvyzdqfqxxiejqmocpuopkmhpbir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088453.330527-1022-113376341573846/AnsiballZ_systemd.py'
Jan 22 13:27:33 compute-2 sudo[53035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:33 compute-2 python3.9[53037]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:27:33 compute-2 systemd[1]: Reloading Network Manager...
Jan 22 13:27:33 compute-2 NetworkManager[49000]: <info>  [1769088453.9419] audit: op="reload" arg="0" pid=53041 uid=0 result="success"
Jan 22 13:27:33 compute-2 NetworkManager[49000]: <info>  [1769088453.9425] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 22 13:27:34 compute-2 systemd[1]: Reloaded Network Manager.
Jan 22 13:27:34 compute-2 sudo[53035]: pam_unix(sudo:session): session closed for user root
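
[Annotation] state=reloaded in the Ansible systemd module translates to a plain unit reload; for NetworkManager that is delivered as the D-Bus Reload method with flags=0, matching the audit op="reload" arg="0" entry above, and triggers the re-read of the configuration files listed in the SIGHUP signal line. Roughly (the busctl form approximates the unit's ExecReload):

    systemctl reload NetworkManager
    # approximately equivalent to:
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager Reload u 0
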
Jan 22 13:27:35 compute-2 sshd-session[44997]: Connection closed by 192.168.122.30 port 48262
Jan 22 13:27:35 compute-2 sshd-session[44994]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:27:35 compute-2 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Jan 22 13:27:35 compute-2 systemd[1]: session-11.scope: Deactivated successfully.
Jan 22 13:27:35 compute-2 systemd[1]: session-11.scope: Consumed 50.847s CPU time.
Jan 22 13:27:35 compute-2 systemd-logind[787]: Removed session 11.
Jan 22 13:27:36 compute-2 sshd-session[53071]: Invalid user export from 69.12.83.184 port 58498
Jan 22 13:27:36 compute-2 sshd-session[53071]: Connection closed by invalid user export 69.12.83.184 port 58498 [preauth]
Jan 22 13:27:40 compute-2 sshd-session[53074]: Accepted publickey for zuul from 192.168.122.30 port 48744 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:27:40 compute-2 systemd-logind[787]: New session 12 of user zuul.
Jan 22 13:27:40 compute-2 systemd[1]: Started Session 12 of User zuul.
Jan 22 13:27:40 compute-2 sshd-session[53074]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:27:41 compute-2 python3.9[53227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:27:42 compute-2 python3.9[53382]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:27:44 compute-2 python3.9[53575]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:27:44 compute-2 sshd-session[53577]: Invalid user bkp from 69.12.83.184 port 58552
Jan 22 13:27:44 compute-2 sshd-session[53077]: Connection closed by 192.168.122.30 port 48744
Jan 22 13:27:44 compute-2 sshd-session[53074]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:27:44 compute-2 systemd[1]: session-12.scope: Deactivated successfully.
Jan 22 13:27:44 compute-2 systemd[1]: session-12.scope: Consumed 2.369s CPU time.
Jan 22 13:27:44 compute-2 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Jan 22 13:27:44 compute-2 systemd-logind[787]: Removed session 12.
Jan 22 13:27:44 compute-2 sshd-session[53577]: Connection closed by invalid user bkp 69.12.83.184 port 58552 [preauth]
Jan 22 13:27:44 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 13:27:50 compute-2 sshd-session[53606]: Accepted publickey for zuul from 192.168.122.30 port 59958 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:27:50 compute-2 systemd-logind[787]: New session 13 of user zuul.
Jan 22 13:27:50 compute-2 systemd[1]: Started Session 13 of User zuul.
Jan 22 13:27:50 compute-2 sshd-session[53606]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:27:51 compute-2 python3.9[53759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:27:52 compute-2 python3.9[53914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:27:53 compute-2 sudo[54068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwcvbhvoeyepfqecfncnvuzzfpnqptzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088472.7984798-81-131065219405206/AnsiballZ_setup.py'
Jan 22 13:27:53 compute-2 sudo[54068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:53 compute-2 python3.9[54070]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:27:53 compute-2 sudo[54068]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:54 compute-2 sudo[54152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxmyhihhfyuejrvcttlvwxcexpelwzft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088472.7984798-81-131065219405206/AnsiballZ_dnf.py'
Jan 22 13:27:54 compute-2 sudo[54152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:54 compute-2 python3.9[54154]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:27:56 compute-2 sudo[54152]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:56 compute-2 sudo[54306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqjfavpsvaoxqencfubkgqfcmnuudny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088476.5976455-117-243147799109100/AnsiballZ_setup.py'
Jan 22 13:27:56 compute-2 sudo[54306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:57 compute-2 python3.9[54308]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:27:57 compute-2 sudo[54306]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:58 compute-2 sudo[54501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydezjrwnmrevzqzbpajioxzcbneirntf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088478.147756-151-48393985289782/AnsiballZ_file.py'
Jan 22 13:27:58 compute-2 sudo[54501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:27:58 compute-2 python3.9[54503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:27:58 compute-2 sudo[54501]: pam_unix(sudo:session): session closed for user root
Jan 22 13:27:59 compute-2 sudo[54653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztchvoatcrzolkuuguolqznfkwmunzzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088479.4749498-175-127348425872331/AnsiballZ_command.py'
Jan 22 13:27:59 compute-2 sudo[54653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:00 compute-2 python3.9[54655]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:28:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat3623228423-merged.mount: Deactivated successfully.
Jan 22 13:28:00 compute-2 podman[54656]: 2026-01-22 13:28:00.666754998 +0000 UTC m=+0.477533541 system refresh
Jan 22 13:28:00 compute-2 sudo[54653]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:01 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:28:01 compute-2 sudo[54816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqujdrmujbtubvbljcukgemrvnfghhzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088480.9226375-198-133785573338893/AnsiballZ_stat.py'
Jan 22 13:28:01 compute-2 sudo[54816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:01 compute-2 python3.9[54818]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:01 compute-2 sudo[54816]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:02 compute-2 sudo[54939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavyuvuutvprnnvxidqplefjwjpxzpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088480.9226375-198-133785573338893/AnsiballZ_copy.py'
Jan 22 13:28:02 compute-2 sudo[54939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:02 compute-2 python3.9[54941]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088480.9226375-198-133785573338893/.source.json follow=False _original_basename=podman_network_config.j2 checksum=0c46a80e07b38ef47d30b351f23b4c464d4715e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:02 compute-2 sudo[54939]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:03 compute-2 sudo[55091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woltpewxagcpyeihhkvuqejcyiujpnxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088482.6841474-244-90054794372535/AnsiballZ_stat.py'
Jan 22 13:28:03 compute-2 sudo[55091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:03 compute-2 python3.9[55093]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:03 compute-2 sudo[55091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:03 compute-2 sudo[55214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auwuqbaqgteolmfvbnakethjkingpcga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088482.6841474-244-90054794372535/AnsiballZ_copy.py'
Jan 22 13:28:03 compute-2 sudo[55214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:03 compute-2 python3.9[55216]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088482.6841474-244-90054794372535/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5a3e69bacb50e2daad69ea0ffc6501536059b061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:03 compute-2 sudo[55214]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:04 compute-2 sudo[55366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwddbhtpukbpxemajuxonohzuybrmviw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088484.0701756-291-33309230803302/AnsiballZ_ini_file.py'
Jan 22 13:28:04 compute-2 sudo[55366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:04 compute-2 python3.9[55368]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:04 compute-2 sudo[55366]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:05 compute-2 sudo[55518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lztadrdttmctiaqqxhdbtneeiiisjjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088484.8495622-291-104221347165681/AnsiballZ_ini_file.py'
Jan 22 13:28:05 compute-2 sudo[55518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:05 compute-2 python3.9[55520]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:05 compute-2 sudo[55518]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:06 compute-2 sudo[55670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmzjxnzpxfessuxbmtlacoenjgwjqgrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088485.7018375-291-252369936715602/AnsiballZ_ini_file.py'
Jan 22 13:28:06 compute-2 sudo[55670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:06 compute-2 python3.9[55672]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:06 compute-2 sudo[55670]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:06 compute-2 sudo[55822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvtyghipgzilotrzgjtsfeyzwwnrzsjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088486.3699563-291-184540209470888/AnsiballZ_ini_file.py'
Jan 22 13:28:06 compute-2 sudo[55822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:06 compute-2 python3.9[55824]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:06 compute-2 sudo[55822]: pam_unix(sudo:session): session closed for user root
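
The four ini_file runs above pin podman's engine settings in /etc/containers/containers.conf: pids_limit = 4096 under [containers], events_logger = "journald" and runtime = "crun" under [engine], and network_backend = "netavark" under [network]. A minimal Python sketch of the create-and-upsert behaviour they rely on (paths, sections and values copied from the log; this is not the ini_file module itself, and unlike the module it would drop comments from an existing file):

    #!/usr/bin/python3.9
    import configparser, os

    def upsert(path, section, option, value):
        # Rough equivalent of community.general.ini_file with create=True,
        # state=present: add the section if missing, then set option = value.
        cp = configparser.ConfigParser()
        cp.optionxform = str              # keep option names case-sensitive
        if os.path.exists(path):
            cp.read(path)
        if not cp.has_section(section):
            cp.add_section(section)
        cp.set(section, option, value)
        with open(path, "w") as fh:
            cp.write(fh)

    for section, option, value in (
        ("containers", "pids_limit", "4096"),
        ("engine", "events_logger", '"journald"'),
        ("engine", "runtime", '"crun"'),
        ("network", "network_backend", '"netavark"'),
    ):
        upsert("/etc/containers/containers.conf", section, option, value)
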
Jan 22 13:28:07 compute-2 sudo[55974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhojygrmxizjarluaobftojyjsypdfci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088487.261163-384-112924183258482/AnsiballZ_dnf.py'
Jan 22 13:28:07 compute-2 sudo[55974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:07 compute-2 python3.9[55976]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:28:09 compute-2 sudo[55974]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:10 compute-2 sudo[56127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvmomhrtyayfmjspxwygexrauxvprmsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088489.8676538-417-125993435422233/AnsiballZ_setup.py'
Jan 22 13:28:10 compute-2 sudo[56127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:10 compute-2 python3.9[56129]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:28:10 compute-2 sudo[56127]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:10 compute-2 sudo[56281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dztgoyiikoeopetcpulvrtuxvcwqtnqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088490.725321-441-56223147808124/AnsiballZ_stat.py'
Jan 22 13:28:10 compute-2 sudo[56281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:11 compute-2 python3.9[56283]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:28:11 compute-2 sudo[56281]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:11 compute-2 sudo[56433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdocgqrcbtqtfntuqqbsxzyybolapvge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088491.5229943-469-166506211777774/AnsiballZ_stat.py'
Jan 22 13:28:11 compute-2 sudo[56433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:11 compute-2 python3.9[56435]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:28:12 compute-2 sudo[56433]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:12 compute-2 sudo[56585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnluewczesmrljfcevosscohdnfgfeoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088492.3399775-499-54096164525479/AnsiballZ_command.py'
Jan 22 13:28:12 compute-2 sudo[56585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:12 compute-2 python3.9[56587]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:28:12 compute-2 sudo[56585]: pam_unix(sudo:session): session closed for user root
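
The is-system-running probe above is how the play waits for a settled boot before touching services; a small sketch of the same check:

    import subprocess
    # "systemctl is-system-running" prints one state word (initializing,
    # starting, running, degraded, maintenance, stopping, ...) and exits 0
    # only for "running", which is what makes it usable as a gate here.
    r = subprocess.run(["systemctl", "is-system-running"],
                       capture_output=True, text=True)
    print(r.stdout.strip(), r.returncode)
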
Jan 22 13:28:13 compute-2 sudo[56738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odyfertebqyvzxdmlozihuxuypwscksv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088493.2925572-528-48263306866829/AnsiballZ_service_facts.py'
Jan 22 13:28:13 compute-2 sudo[56738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:13 compute-2 python3.9[56740]: ansible-service_facts Invoked
Jan 22 13:28:13 compute-2 network[56757]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:28:13 compute-2 network[56758]: 'network-scripts' will be removed from the distribution in the near future.
Jan 22 13:28:13 compute-2 network[56759]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:28:16 compute-2 sudo[56738]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:19 compute-2 sudo[57042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kudtqhdljuchktktznvfnewobrkrgwzu ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769088499.055614-574-127295434353813/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769088499.055614-574-127295434353813/args'
Jan 22 13:28:19 compute-2 sudo[57042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:19 compute-2 sudo[57042]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:20 compute-2 sudo[57209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktajlvtfegzkehyvntgpctltmlhrrgub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088499.8358753-606-161927398479084/AnsiballZ_dnf.py'
Jan 22 13:28:20 compute-2 sudo[57209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:20 compute-2 python3.9[57211]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:28:21 compute-2 sudo[57209]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:23 compute-2 sudo[57363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbjrajhqxiubvczkgiybmdzcajlznawd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088502.6555083-646-233458408048190/AnsiballZ_package_facts.py'
Jan 22 13:28:23 compute-2 sudo[57363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:23 compute-2 python3.9[57365]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 13:28:23 compute-2 sudo[57363]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:24 compute-2 sshd-session[57213]: Invalid user 1 from 69.12.83.184 port 58690
Jan 22 13:28:24 compute-2 sshd-session[57213]: Connection closed by invalid user 1 69.12.83.184 port 58690 [preauth]
Jan 22 13:28:25 compute-2 sudo[57516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdeftzsjguzumzyeimvrkrgkixxkksed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088504.9399352-677-215326738337300/AnsiballZ_stat.py'
Jan 22 13:28:25 compute-2 sudo[57516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:25 compute-2 python3.9[57518]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:25 compute-2 sudo[57516]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:25 compute-2 sudo[57641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grnenbkzqzliixqqaeeqrwvweocoxepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088504.9399352-677-215326738337300/AnsiballZ_copy.py'
Jan 22 13:28:25 compute-2 sudo[57641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:26 compute-2 python3.9[57643]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088504.9399352-677-215326738337300/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:26 compute-2 sudo[57641]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:26 compute-2 sudo[57795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyhvdpunetmrjmaomilbbussvmroouif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088506.4549255-721-243239578315662/AnsiballZ_stat.py'
Jan 22 13:28:26 compute-2 sudo[57795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:26 compute-2 python3.9[57797]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:26 compute-2 sudo[57795]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:27 compute-2 sudo[57920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxymohluwtdfqeyoynooznivuljmueaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088506.4549255-721-243239578315662/AnsiballZ_copy.py'
Jan 22 13:28:27 compute-2 sudo[57920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:27 compute-2 python3.9[57922]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088506.4549255-721-243239578315662/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:27 compute-2 sudo[57920]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:29 compute-2 sudo[58074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkoktrdrzerkshrzzjwcamenfbttkdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088508.7700868-785-202667125794026/AnsiballZ_lineinfile.py'
Jan 22 13:28:29 compute-2 sudo[58074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:29 compute-2 python3.9[58076]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:29 compute-2 sudo[58074]: pam_unix(sudo:session): session closed for user root
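
The lineinfile call above forces PEERNTP=no into /etc/sysconfig/network so that DHCP-supplied NTP servers cannot fight the chrony configuration just installed. A minimal sketch of its replace-or-append semantics (regexp=^PEERNTP=, create=True, backup=True), not the module itself:

    import os, re, shutil

    PATH, LINE = "/etc/sysconfig/network", "PEERNTP=no\n"
    lines = []
    if os.path.exists(PATH):
        shutil.copy2(PATH, PATH + ".bak")   # crude stand-in for backup=True
        with open(PATH) as fh:
            lines = fh.readlines()
    for i, l in enumerate(lines):
        if re.match(r"^PEERNTP=", l):       # replace the first matching line
            lines[i] = LINE
            break
    else:
        lines.append(LINE)                  # or append if none matched
    with open(PATH, "w") as fh:
        fh.writelines(lines)
    os.chmod(PATH, 0o644)                   # mode=0644 from the invocation
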
Jan 22 13:28:30 compute-2 sudo[58228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyqjjlyttmeebmznjrqnwupwiyxulkge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088510.6736405-830-21758835354354/AnsiballZ_setup.py'
Jan 22 13:28:30 compute-2 sudo[58228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:31 compute-2 python3.9[58230]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:28:31 compute-2 sudo[58228]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:32 compute-2 sudo[58312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irncmdncvfxbaghqdmesripflrzurwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088510.6736405-830-21758835354354/AnsiballZ_systemd.py'
Jan 22 13:28:32 compute-2 sudo[58312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:32 compute-2 python3.9[58314]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:28:32 compute-2 sudo[58312]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:33 compute-2 sudo[58466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvvdjsbhlibrigznkflluszgcakvwgqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088513.4627528-878-138145665353048/AnsiballZ_setup.py'
Jan 22 13:28:33 compute-2 sudo[58466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:34 compute-2 python3.9[58468]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:28:34 compute-2 sudo[58466]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:34 compute-2 sudo[58550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olhtfofacvoyzydzhwskmqpntozfwbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088513.4627528-878-138145665353048/AnsiballZ_systemd.py'
Jan 22 13:28:34 compute-2 sudo[58550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:34 compute-2 python3.9[58552]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:28:34 compute-2 chronyd[798]: chronyd exiting
Jan 22 13:28:34 compute-2 systemd[1]: Stopping NTP client/server...
Jan 22 13:28:34 compute-2 systemd[1]: chronyd.service: Deactivated successfully.
Jan 22 13:28:34 compute-2 systemd[1]: Stopped NTP client/server.
Jan 22 13:28:34 compute-2 systemd[1]: Starting NTP client/server...
Jan 22 13:28:34 compute-2 chronyd[58561]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 13:28:34 compute-2 chronyd[58561]: Frequency -26.130 +/- 0.081 ppm read from /var/lib/chrony/drift
Jan 22 13:28:34 compute-2 chronyd[58561]: Loaded seccomp filter (level 2)
Jan 22 13:28:34 compute-2 systemd[1]: Started NTP client/server.
Jan 22 13:28:34 compute-2 sudo[58550]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:35 compute-2 sshd-session[53609]: Connection closed by 192.168.122.30 port 59958
Jan 22 13:28:35 compute-2 sshd-session[53606]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:28:35 compute-2 systemd[1]: session-13.scope: Deactivated successfully.
Jan 22 13:28:35 compute-2 systemd[1]: session-13.scope: Consumed 26.825s CPU time.
Jan 22 13:28:35 compute-2 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Jan 22 13:28:35 compute-2 systemd-logind[787]: Removed session 13.
Jan 22 13:28:41 compute-2 sshd-session[58587]: Accepted publickey for zuul from 192.168.122.30 port 38394 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:28:41 compute-2 systemd-logind[787]: New session 14 of user zuul.
Jan 22 13:28:41 compute-2 systemd[1]: Started Session 14 of User zuul.
Jan 22 13:28:41 compute-2 sshd-session[58587]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:28:41 compute-2 sudo[58740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tflccuppujrmwbkbhtmdtdwhujikcpog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088521.4343522-29-238159402718430/AnsiballZ_file.py'
Jan 22 13:28:41 compute-2 sudo[58740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:43 compute-2 python3.9[58742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:43 compute-2 sudo[58740]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:44 compute-2 sudo[58892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qitkoibbsemcdjyxjuhlhswiaeihtolp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088523.6926246-64-62528235904327/AnsiballZ_stat.py'
Jan 22 13:28:44 compute-2 sudo[58892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:44 compute-2 python3.9[58894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:44 compute-2 sudo[58892]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:44 compute-2 sudo[59015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdyicsbiiqozgatfdsdbubbnsniowqep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088523.6926246-64-62528235904327/AnsiballZ_copy.py'
Jan 22 13:28:44 compute-2 sudo[59015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:45 compute-2 python3.9[59017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088523.6926246-64-62528235904327/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:45 compute-2 sudo[59015]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:45 compute-2 sshd-session[58590]: Connection closed by 192.168.122.30 port 38394
Jan 22 13:28:45 compute-2 sshd-session[58587]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:28:45 compute-2 systemd[1]: session-14.scope: Deactivated successfully.
Jan 22 13:28:45 compute-2 systemd[1]: session-14.scope: Consumed 1.606s CPU time.
Jan 22 13:28:45 compute-2 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Jan 22 13:28:45 compute-2 systemd-logind[787]: Removed session 14.
Jan 22 13:28:51 compute-2 sshd-session[59042]: Accepted publickey for zuul from 192.168.122.30 port 34222 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:28:51 compute-2 systemd-logind[787]: New session 15 of user zuul.
Jan 22 13:28:51 compute-2 systemd[1]: Started Session 15 of User zuul.
Jan 22 13:28:51 compute-2 sshd-session[59042]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:28:52 compute-2 python3.9[59195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:28:53 compute-2 sudo[59349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpnlwspstuommsmkeioflpwhilzrrlzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088533.2353938-61-102021603401176/AnsiballZ_file.py'
Jan 22 13:28:53 compute-2 sudo[59349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:53 compute-2 python3.9[59351]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:53 compute-2 sudo[59349]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:54 compute-2 sudo[59524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydrxfbzpbbqcvvgqmfkkeninxoseepd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088534.295648-85-44645175631/AnsiballZ_stat.py'
Jan 22 13:28:54 compute-2 sudo[59524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:55 compute-2 python3.9[59526]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:55 compute-2 sudo[59524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:55 compute-2 sudo[59647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkumsigqnoqbpaofqjmafqaacyxrnhwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088534.295648-85-44645175631/AnsiballZ_copy.py'
Jan 22 13:28:55 compute-2 sudo[59647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:55 compute-2 python3.9[59649]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769088534.295648-85-44645175631/.source.json _original_basename=.20cdxh6o follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:55 compute-2 sudo[59647]: pam_unix(sudo:session): session closed for user root
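
The checksum logged for the new /root/.config/containers/auth.json, bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f, is the SHA-1 of the two-byte string "{}", which suggests the registry auth file was deployed as an empty JSON object; quick check:

    import hashlib
    # SHA-1 of an empty JSON object, matching the checksum= field above.
    print(hashlib.sha1(b"{}").hexdigest())
    # bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f
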
Jan 22 13:28:56 compute-2 sudo[59799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjhlayuoikjonwcefqytrqsnofwiumn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088536.4608045-155-178240899965189/AnsiballZ_stat.py'
Jan 22 13:28:56 compute-2 sudo[59799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:56 compute-2 python3.9[59801]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:56 compute-2 sudo[59799]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:57 compute-2 sudo[59922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giwcuipvjstqrmanevnlkzvlhynhaayh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088536.4608045-155-178240899965189/AnsiballZ_copy.py'
Jan 22 13:28:57 compute-2 sudo[59922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:57 compute-2 python3.9[59924]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088536.4608045-155-178240899965189/.source _original_basename=.y0j3uq0e follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:28:57 compute-2 sudo[59922]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:58 compute-2 sudo[60074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpiohxgmewtgawnnolppjppodnpwbpzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088538.2113829-203-167446215403859/AnsiballZ_file.py'
Jan 22 13:28:58 compute-2 sudo[60074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:58 compute-2 python3.9[60076]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:28:58 compute-2 sudo[60074]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:59 compute-2 sudo[60226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebymrpkbyghcragcwpijoeuabznjucb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088539.0509212-227-28165186001298/AnsiballZ_stat.py'
Jan 22 13:28:59 compute-2 sudo[60226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:28:59 compute-2 python3.9[60228]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:28:59 compute-2 sudo[60226]: pam_unix(sudo:session): session closed for user root
Jan 22 13:28:59 compute-2 sudo[60349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgeblukpothbswfqwhdekvdkrjwootum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088539.0509212-227-28165186001298/AnsiballZ_copy.py'
Jan 22 13:28:59 compute-2 sudo[60349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:00 compute-2 python3.9[60351]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088539.0509212-227-28165186001298/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:29:00 compute-2 sudo[60349]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:00 compute-2 sudo[60501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nejocdejplxoczlajuidbxmjcuagtrld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088540.378298-227-224958880369007/AnsiballZ_stat.py'
Jan 22 13:29:00 compute-2 sudo[60501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:00 compute-2 python3.9[60503]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:00 compute-2 sudo[60501]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:01 compute-2 sudo[60624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zavvxehgrsojgbxoydigmwglisnjjvuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088540.378298-227-224958880369007/AnsiballZ_copy.py'
Jan 22 13:29:01 compute-2 sudo[60624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:01 compute-2 python3.9[60626]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088540.378298-227-224958880369007/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:29:01 compute-2 sudo[60624]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:01 compute-2 anacron[8202]: Job `cron.weekly' started
Jan 22 13:29:01 compute-2 anacron[8202]: Job `cron.weekly' terminated
Jan 22 13:29:02 compute-2 sudo[60778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbuqcfipvqqwmtsqfprrzawjivyybilk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088542.1627748-313-95504487237578/AnsiballZ_file.py'
Jan 22 13:29:02 compute-2 sudo[60778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:02 compute-2 python3.9[60780]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:02 compute-2 sudo[60778]: pam_unix(sudo:session): session closed for user root
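
mode=420 in the file invocation above is not a stray value: Ansible logs numeric modes in decimal, and 420 decimal is the familiar 0644 octal:

    # 420 (decimal) == 0o644 (octal), i.e. rw-r--r--
    assert oct(420) == "0o644"
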
Jan 22 13:29:03 compute-2 sudo[60930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebijnclqmqimpxtmpyomakxktnprhypy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088542.9345155-338-206018036371495/AnsiballZ_stat.py'
Jan 22 13:29:03 compute-2 sudo[60930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:03 compute-2 python3.9[60932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:03 compute-2 sudo[60930]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:03 compute-2 sshd-session[60933]: Invalid user sol from 45.148.10.240 port 60182
Jan 22 13:29:03 compute-2 sshd-session[60933]: Connection closed by invalid user sol 45.148.10.240 port 60182 [preauth]
Jan 22 13:29:03 compute-2 sudo[61055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtdgkeqtdceevqaqlbiaielffmbuswjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088542.9345155-338-206018036371495/AnsiballZ_copy.py'
Jan 22 13:29:03 compute-2 sudo[61055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:05 compute-2 python3.9[61057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088542.9345155-338-206018036371495/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:05 compute-2 sudo[61055]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:06 compute-2 sudo[61207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htipnjsgsrojljvgcxjhqygakvnexemu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088545.6803098-383-60318238510431/AnsiballZ_stat.py'
Jan 22 13:29:06 compute-2 sudo[61207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:06 compute-2 python3.9[61209]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:06 compute-2 sudo[61207]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:06 compute-2 sudo[61330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmhimryizglgcbgowrmjzlzzpiaibepa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088545.6803098-383-60318238510431/AnsiballZ_copy.py'
Jan 22 13:29:06 compute-2 sudo[61330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:06 compute-2 python3.9[61332]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088545.6803098-383-60318238510431/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:06 compute-2 sudo[61330]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:07 compute-2 sudo[61483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvegwclkfzsrosxyvwuajyxeznhdvlsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088547.0161443-427-111058997492499/AnsiballZ_systemd.py'
Jan 22 13:29:07 compute-2 sudo[61483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:07 compute-2 python3.9[61485]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:07 compute-2 systemd[1]: Reloading.
Jan 22 13:29:08 compute-2 systemd-rc-local-generator[61512]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:08 compute-2 systemd-sysv-generator[61517]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:08 compute-2 systemd[1]: Reloading.
Jan 22 13:29:08 compute-2 systemd-sysv-generator[61550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:08 compute-2 systemd-rc-local-generator[61547]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:08 compute-2 systemd[1]: Starting EDPM Container Shutdown...
Jan 22 13:29:08 compute-2 systemd[1]: Finished EDPM Container Shutdown.
Jan 22 13:29:08 compute-2 sudo[61483]: pam_unix(sudo:session): session closed for user root
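
The systemd module call above (daemon_reload=True, enabled=True, state=started) is what produces the two Reloading passes and the Starting/Finished pair for edpm-container-shutdown; a rough shell-level equivalent, sketched in Python:

    import subprocess
    # Hedged equivalent of ansible.builtin.systemd for this unit: re-read
    # unit files (picking up the .service and .preset just copied), enable,
    # then start.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm-container-shutdown"],
                ["systemctl", "start", "edpm-container-shutdown"]):
        subprocess.run(cmd, check=True)
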
Jan 22 13:29:09 compute-2 sudo[61711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggrnomwrjoobqiqsspnbyltdnpgqppzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088548.9560397-452-46764105561815/AnsiballZ_stat.py'
Jan 22 13:29:09 compute-2 sudo[61711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:09 compute-2 python3.9[61713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:09 compute-2 sudo[61711]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:09 compute-2 sudo[61834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcbpfrbywqlfrcpqqqsnuumjacijlux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088548.9560397-452-46764105561815/AnsiballZ_copy.py'
Jan 22 13:29:09 compute-2 sudo[61834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:09 compute-2 python3.9[61836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088548.9560397-452-46764105561815/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:09 compute-2 sudo[61834]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:10 compute-2 sudo[61986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbgoyuxzlnwpqjhlyqhoxpqirdkvoesw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088550.2500665-496-31084238662092/AnsiballZ_stat.py'
Jan 22 13:29:10 compute-2 sudo[61986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:10 compute-2 python3.9[61988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:10 compute-2 sudo[61986]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:11 compute-2 sudo[62109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifvtanqbisfujyjypebpxyrtlwddkoyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088550.2500665-496-31084238662092/AnsiballZ_copy.py'
Jan 22 13:29:11 compute-2 sudo[62109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:11 compute-2 python3.9[62111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088550.2500665-496-31084238662092/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:11 compute-2 sudo[62109]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:11 compute-2 sudo[62261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htccxmkleptyjwwisiaqhgaxgjdvliyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088551.628187-541-3764540121267/AnsiballZ_systemd.py'
Jan 22 13:29:11 compute-2 sudo[62261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:12 compute-2 python3.9[62263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:12 compute-2 systemd[1]: Reloading.
Jan 22 13:29:12 compute-2 systemd-rc-local-generator[62286]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:12 compute-2 systemd-sysv-generator[62291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:12 compute-2 systemd[1]: Reloading.
Jan 22 13:29:12 compute-2 systemd-rc-local-generator[62329]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:12 compute-2 systemd-sysv-generator[62333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:12 compute-2 systemd[1]: Starting Create netns directory...
Jan 22 13:29:12 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:29:12 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:29:12 compute-2 systemd[1]: Finished Create netns directory.
Jan 22 13:29:12 compute-2 sudo[62261]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:13 compute-2 python3.9[62491]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:29:13 compute-2 network[62508]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:29:13 compute-2 network[62509]: 'network-scripts' will be removed from the distribution in the near future.
Jan 22 13:29:13 compute-2 network[62510]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:29:20 compute-2 sudo[62771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scpcmjjmgriblkthjszhcszkcarhgvkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088559.7857022-589-146473261597258/AnsiballZ_systemd.py'
Jan 22 13:29:20 compute-2 sudo[62771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:20 compute-2 python3.9[62773]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:20 compute-2 systemd[1]: Reloading.
Jan 22 13:29:20 compute-2 systemd-rc-local-generator[62798]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:20 compute-2 systemd-sysv-generator[62803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:20 compute-2 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 22 13:29:20 compute-2 iptables.init[62813]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 22 13:29:21 compute-2 iptables.init[62813]: iptables: Flushing firewall rules: [  OK  ]
Jan 22 13:29:21 compute-2 systemd[1]: iptables.service: Deactivated successfully.
Jan 22 13:29:21 compute-2 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 22 13:29:21 compute-2 sudo[62771]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:21 compute-2 sudo[63007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucagavwxrlgiltminqgrqbiyoasmublt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088561.3635335-589-81961219978717/AnsiballZ_systemd.py'
Jan 22 13:29:21 compute-2 sudo[63007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:21 compute-2 python3.9[63009]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:21 compute-2 sudo[63007]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:22 compute-2 sudo[63161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mndgcppqkfcxdcmqjpkcvqrfzffttatk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088562.3975508-638-219923928233967/AnsiballZ_systemd.py'
Jan 22 13:29:22 compute-2 sudo[63161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:22 compute-2 python3.9[63163]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:29:23 compute-2 systemd[1]: Reloading.
Jan 22 13:29:23 compute-2 systemd-sysv-generator[63197]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:29:23 compute-2 systemd-rc-local-generator[63192]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:29:23 compute-2 systemd[1]: Starting Netfilter Tables...
Jan 22 13:29:23 compute-2 systemd[1]: Finished Netfilter Tables.
Jan 22 13:29:23 compute-2 sudo[63161]: pam_unix(sudo:session): session closed for user root
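
The three systemd calls above hand firewall duty from the legacy iptables/ip6tables services to nftables; roughly, under the assumption that "stopped + disabled" then "started + enabled" is all the modules do here:

    import subprocess
    # Sketch of the handoff logged above. "--now" folds the start/stop into
    # the enable/disable call; check=False on the legacy units because
    # ip6tables was evidently not running (no stop messages were logged).
    for unit in ("iptables.service", "ip6tables.service"):
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
    subprocess.run(["systemctl", "enable", "--now", "nftables"], check=True)
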
Jan 22 13:29:24 compute-2 sudo[63354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiywlyrldmkqnodcridswxrczqvwewqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088563.904041-661-12802822892330/AnsiballZ_command.py'
Jan 22 13:29:24 compute-2 sudo[63354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:29 compute-2 python3.9[63356]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:29 compute-2 sudo[63354]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:30 compute-2 sudo[63507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzxgpxgbqobdwhhepsodwvrsxguzieki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088570.1239164-703-250404812380356/AnsiballZ_stat.py'
Jan 22 13:29:30 compute-2 sudo[63507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:30 compute-2 python3.9[63509]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:30 compute-2 sudo[63507]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:30 compute-2 sudo[63632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlwdvyafhurxcvbkshbhqhboyyugibjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088570.1239164-703-250404812380356/AnsiballZ_copy.py'
Jan 22 13:29:30 compute-2 sudo[63632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:31 compute-2 python3.9[63634]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088570.1239164-703-250404812380356/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:31 compute-2 sudo[63632]: pam_unix(sudo:session): session closed for user root
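
The sshd_config copy above carries validate=/usr/sbin/sshd -T -f %s, so the rendered file is syntax-checked before it ever replaces /etc/ssh/sshd_config. A minimal sketch of that validate-then-install pattern (the candidate path below is hypothetical; the module actually stages the file under ~/.ansible/tmp):

    import shutil, subprocess

    CANDIDATE = "/tmp/sshd_config.candidate"   # hypothetical staging path
    TARGET = "/etc/ssh/sshd_config"

    # sshd -T prints the effective configuration and exits nonzero on a
    # syntax error; -f points it at the candidate instead of the live file.
    subprocess.run(["/usr/sbin/sshd", "-T", "-f", CANDIDATE],
                   check=True, capture_output=True)
    shutil.move(CANDIDATE, TARGET)             # install only if it validated
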
Jan 22 13:29:32 compute-2 sudo[63785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clsczlritpqgkpjykmvvujdqfdvsipnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088572.001788-748-200521072187391/AnsiballZ_systemd.py'
Jan 22 13:29:32 compute-2 sudo[63785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:32 compute-2 python3.9[63787]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:29:32 compute-2 systemd[1]: Reloading OpenSSH server daemon...
Jan 22 13:29:32 compute-2 sshd[1003]: Received SIGHUP; restarting.
Jan 22 13:29:32 compute-2 systemd[1]: Reloaded OpenSSH server daemon.
Jan 22 13:29:32 compute-2 sshd[1003]: Server listening on 0.0.0.0 port 22.
Jan 22 13:29:32 compute-2 sshd[1003]: Server listening on :: port 22.
Jan 22 13:29:32 compute-2 sudo[63785]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:33 compute-2 sudo[63941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stqjzbrktgjtameldboflxvhdhzaxecl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088573.0341449-773-273169636307145/AnsiballZ_file.py'
Jan 22 13:29:33 compute-2 sudo[63941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:33 compute-2 python3.9[63943]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:33 compute-2 sudo[63941]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:34 compute-2 sudo[64093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbvjaswfvrrvhieiotezikpxjhuvgsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088573.713581-797-115470744774321/AnsiballZ_stat.py'
Jan 22 13:29:34 compute-2 sudo[64093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:34 compute-2 python3.9[64095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:34 compute-2 sudo[64093]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:34 compute-2 sudo[64216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvuroypugyawaegyiimesrllhgpiinzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088573.713581-797-115470744774321/AnsiballZ_copy.py'
Jan 22 13:29:34 compute-2 sudo[64216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:34 compute-2 python3.9[64218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088573.713581-797-115470744774321/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:34 compute-2 sudo[64216]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:35 compute-2 sudo[64368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfelqrbirdivjnkfgwnvnpxxxaddgnst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088575.3365884-851-278508610591037/AnsiballZ_timezone.py'
Jan 22 13:29:35 compute-2 sudo[64368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:36 compute-2 python3.9[64370]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 13:29:36 compute-2 systemd[1]: Starting Time & Date Service...
Jan 22 13:29:36 compute-2 systemd[1]: Started Time & Date Service.
Jan 22 13:29:36 compute-2 sudo[64368]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:36 compute-2 sudo[64524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsvmfqqipgjxatmlpsnxigmlkkcvzwlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088576.483893-878-55899928631276/AnsiballZ_file.py'
Jan 22 13:29:36 compute-2 sudo[64524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:37 compute-2 python3.9[64526]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:37 compute-2 sudo[64524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:37 compute-2 sudo[64676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynsbqmuolabxfbfwvsivpzpdpptxebvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088577.2856202-902-208474160571953/AnsiballZ_stat.py'
Jan 22 13:29:37 compute-2 sudo[64676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:37 compute-2 python3.9[64678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:37 compute-2 sudo[64676]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:38 compute-2 sudo[64799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vehujcrmtsqiwhdeaabhzufnhqwfgalo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088577.2856202-902-208474160571953/AnsiballZ_copy.py'
Jan 22 13:29:38 compute-2 sudo[64799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:38 compute-2 python3.9[64801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088577.2856202-902-208474160571953/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:38 compute-2 sudo[64799]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:38 compute-2 sudo[64951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywjpxhbxeypcswfudpxbwusqcxrvorg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088578.6656504-947-253414018419296/AnsiballZ_stat.py'
Jan 22 13:29:38 compute-2 sudo[64951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:39 compute-2 python3.9[64953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:39 compute-2 sudo[64951]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:39 compute-2 sudo[65074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iptmpwutujbzvfwbarppvkekntaorkhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088578.6656504-947-253414018419296/AnsiballZ_copy.py'
Jan 22 13:29:39 compute-2 sudo[65074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:39 compute-2 python3.9[65076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088578.6656504-947-253414018419296/.source.yaml _original_basename=.05vf8810 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:39 compute-2 sudo[65074]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:40 compute-2 sudo[65226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lowffdmggwhgpujmmsnfvblldgzxwvvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088579.9630961-992-58148605538754/AnsiballZ_stat.py'
Jan 22 13:29:40 compute-2 sudo[65226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:40 compute-2 python3.9[65228]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:40 compute-2 sudo[65226]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:40 compute-2 sudo[65349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brluioelyurqldxkfvfjjuuvtpiqtyjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088579.9630961-992-58148605538754/AnsiballZ_copy.py'
Jan 22 13:29:40 compute-2 sudo[65349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:40 compute-2 python3.9[65351]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088579.9630961-992-58148605538754/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:40 compute-2 sudo[65349]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:41 compute-2 sudo[65501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnshblbolubtbpsmxejrjjwtxxjqwary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088581.2196493-1036-208602030693959/AnsiballZ_command.py'
Jan 22 13:29:41 compute-2 sudo[65501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:41 compute-2 python3.9[65503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:41 compute-2 sudo[65501]: pam_unix(sudo:session): session closed for user root
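
The command above loads the freshly copied iptables-compatibility ruleset into the kernel; the follow-up `nft -j list ruleset` dumps the live state as JSON so the role can inspect what is actually installed. By hand:

    nft -f /etc/nftables/iptables.nft   # atomically apply the file's ruleset
    nft -j list ruleset                 # dump the resulting state as JSON
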
Jan 22 13:29:42 compute-2 sudo[65654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizmnrpnzgtexzaqmbltxzvlsbfaepsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088581.9549692-1061-112964586452458/AnsiballZ_command.py'
Jan 22 13:29:42 compute-2 sudo[65654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:42 compute-2 python3.9[65656]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:42 compute-2 sudo[65654]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:43 compute-2 sudo[65807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwzqajbsietoqcqinfzrchwoiugjaxhc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769088582.8759425-1085-63857250728424/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:29:43 compute-2 sudo[65807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:43 compute-2 python3[65809]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:29:43 compute-2 sudo[65807]: pam_unix(sudo:session): session closed for user root
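
edpm_nftables_from_files runs under plain /usr/bin/python3 rather than the platform python3.9, which marks it as a custom collection module. Judging by its single src parameter, it reads the rule files under /var/lib/edpm-config/firewall -- the base and user YAML files copied above -- and merges them into the variables used to render the edpm-*.nft files in the tasks that follow. The user-rules file's sha1 (97d170e1550eee4afc0af065b78cda302a97674c) appears to be that of a bare "[]", i.e. an empty user rule list:

    ls /var/lib/edpm-config/firewall
    # edpm-nftables-base.yaml  edpm-nftables-user-rules.yaml
    printf '[]' | sha1sum   # 97d170e1550eee4afc0af065b78cda302a97674c
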
Jan 22 13:29:44 compute-2 sudo[65959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfpaqejsxapzdgugxfowotpytmdnpkfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088583.799714-1109-168153337736363/AnsiballZ_stat.py'
Jan 22 13:29:44 compute-2 sudo[65959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:44 compute-2 python3.9[65961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:44 compute-2 sudo[65959]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:44 compute-2 sudo[66082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimaolabultthwhvgxgxdamxcuvirzmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088583.799714-1109-168153337736363/AnsiballZ_copy.py'
Jan 22 13:29:44 compute-2 sudo[66082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:44 compute-2 python3.9[66084]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088583.799714-1109-168153337736363/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:44 compute-2 sudo[66082]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:45 compute-2 sudo[66234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhxqupcyidfyesxpnhljnytboiejevcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088585.1887827-1154-177474045876685/AnsiballZ_stat.py'
Jan 22 13:29:45 compute-2 sudo[66234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:45 compute-2 python3.9[66236]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:45 compute-2 sudo[66234]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:46 compute-2 sudo[66357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usocmzejiuaigjnaxdotglwanwotkqse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088585.1887827-1154-177474045876685/AnsiballZ_copy.py'
Jan 22 13:29:46 compute-2 sudo[66357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:46 compute-2 python3.9[66359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088585.1887827-1154-177474045876685/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:46 compute-2 sudo[66357]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:47 compute-2 sudo[66509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwqahhyiwapjaqpmsmamaufkvuwjjclm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088586.7367969-1199-258533528063595/AnsiballZ_stat.py'
Jan 22 13:29:47 compute-2 sudo[66509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:47 compute-2 python3.9[66511]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:47 compute-2 sudo[66509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:47 compute-2 sudo[66632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vytfdacyckyxgpruxlbetlrrqtwkwcmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088586.7367969-1199-258533528063595/AnsiballZ_copy.py'
Jan 22 13:29:47 compute-2 sudo[66632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:47 compute-2 python3.9[66634]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088586.7367969-1199-258533528063595/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:47 compute-2 sudo[66632]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:48 compute-2 sudo[66784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdlmgdtgqpoltapllukpexequppllvft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088588.020991-1244-200085709639151/AnsiballZ_stat.py'
Jan 22 13:29:48 compute-2 sudo[66784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:48 compute-2 python3.9[66786]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:48 compute-2 sudo[66784]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:48 compute-2 sudo[66907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jysmhrnmmumqfbjwxjrmurcpydgoowzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088588.020991-1244-200085709639151/AnsiballZ_copy.py'
Jan 22 13:29:48 compute-2 sudo[66907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:49 compute-2 python3.9[66909]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088588.020991-1244-200085709639151/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:49 compute-2 sudo[66907]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:49 compute-2 sudo[67059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teszgbzpehupfaropaoofrdggnqwxxmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088589.3809493-1289-119630266069739/AnsiballZ_stat.py'
Jan 22 13:29:49 compute-2 sudo[67059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:50 compute-2 python3.9[67061]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:29:50 compute-2 sudo[67059]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:50 compute-2 sudo[67182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-astaaevmzagllruogwlponzleogcoptd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088589.3809493-1289-119630266069739/AnsiballZ_copy.py'
Jan 22 13:29:50 compute-2 sudo[67182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:50 compute-2 python3.9[67184]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088589.3809493-1289-119630266069739/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:50 compute-2 sudo[67182]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:51 compute-2 sudo[67334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covjfhfwsibqgeixztuechmuhjuelimj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088590.9111598-1334-224694303762461/AnsiballZ_file.py'
Jan 22 13:29:51 compute-2 sudo[67334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:51 compute-2 python3.9[67336]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:51 compute-2 sudo[67334]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:51 compute-2 sudo[67486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjzpdkniomqvgogjktumpykrdazhywme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088591.5935655-1358-110823311274315/AnsiballZ_command.py'
Jan 22 13:29:51 compute-2 sudo[67486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:52 compute-2 python3.9[67488]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:29:52 compute-2 sudo[67486]: pam_unix(sudo:session): session closed for user root
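
The check above concatenates the five generated files in dependency order (chains, flushes, rules, update-jumps, jumps) and feeds the result to nft -c, which parses and validates the combined ruleset without committing anything to the kernel. The same dry run by hand:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only, no changes
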
Jan 22 13:29:52 compute-2 sudo[67645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdryfedpelzmctlwmcjaqbqsucqihgep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088592.3887734-1382-166137453648274/AnsiballZ_blockinfile.py'
Jan 22 13:29:52 compute-2 sudo[67645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:53 compute-2 python3.9[67647]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:53 compute-2 sudo[67645]: pam_unix(sudo:session): session closed for user root
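
Persistence comes from this blockinfile: on RHEL 9 the nftables.service unit replays /etc/sysconfig/nftables.conf at boot, and validate=nft -c -f %s re-checks the merged file before it is written. Reconstructed from the block= parameter and markers above, the managed block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
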
Jan 22 13:29:53 compute-2 sudo[67798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwalwpeqtdqiqaekemcnytgocsszrnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088593.451679-1409-33065736996788/AnsiballZ_file.py'
Jan 22 13:29:53 compute-2 sudo[67798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:54 compute-2 python3.9[67800]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:54 compute-2 sudo[67798]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:54 compute-2 sudo[67950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nocusnwxfllzwmjlyowxwsrsvtxqhdpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088594.2136269-1409-199149903013519/AnsiballZ_file.py'
Jan 22 13:29:54 compute-2 sudo[67950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:54 compute-2 python3.9[67952]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:29:54 compute-2 sudo[67950]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:55 compute-2 sudo[68102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxqenormwimvkdidwbwbpwfqczaohybv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088595.0131786-1454-28243214582027/AnsiballZ_mount.py'
Jan 22 13:29:55 compute-2 sudo[68102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:55 compute-2 python3.9[68104]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:29:55 compute-2 sudo[68102]: pam_unix(sudo:session): session closed for user root
Jan 22 13:29:56 compute-2 sudo[68255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqdddivavpqxspcozdfrkqnbqithcrvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088596.0264823-1454-245840302951288/AnsiballZ_mount.py'
Jan 22 13:29:56 compute-2 sudo[68255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:29:56 compute-2 python3.9[68257]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:29:56 compute-2 sudo[68255]: pam_unix(sudo:session): session closed for user root
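
The two ansible.posix.mount tasks give the node one hugetlbfs mount per page size, using the directories created just before (owner zuul, group hugetlbfs, mode 0775); state=mounted mounts the filesystems immediately and, with boot=True, also records them in /etc/fstab. Manual equivalent:

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    grep hugetlbfs /etc/fstab   # both entries should now be persisted
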
Jan 22 13:29:57 compute-2 sshd-session[59045]: Connection closed by 192.168.122.30 port 34222
Jan 22 13:29:57 compute-2 sshd-session[59042]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:29:57 compute-2 systemd[1]: session-15.scope: Deactivated successfully.
Jan 22 13:29:57 compute-2 systemd[1]: session-15.scope: Consumed 37.675s CPU time.
Jan 22 13:29:57 compute-2 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Jan 22 13:29:57 compute-2 systemd-logind[787]: Removed session 15.
Jan 22 13:30:03 compute-2 sshd-session[68283]: Accepted publickey for zuul from 192.168.122.30 port 41918 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:30:03 compute-2 systemd-logind[787]: New session 16 of user zuul.
Jan 22 13:30:03 compute-2 systemd[1]: Started Session 16 of User zuul.
Jan 22 13:30:03 compute-2 sshd-session[68283]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:04 compute-2 sudo[68436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijrjmvgmdiaqpqdlsijeggboyurpcfpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088604.0510504-25-94618408554113/AnsiballZ_tempfile.py'
Jan 22 13:30:04 compute-2 sudo[68436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:04 compute-2 python3.9[68438]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 13:30:04 compute-2 sudo[68436]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:06 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 13:30:06 compute-2 sudo[68591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sunulgaimefofiluuxiavjthafhmxkjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088606.0220835-62-45431710410823/AnsiballZ_stat.py'
Jan 22 13:30:06 compute-2 sudo[68591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:06 compute-2 python3.9[68593]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:06 compute-2 sudo[68591]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:07 compute-2 sudo[68743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebatwvnyuoiicuixvvrogccfrjtcdrky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088607.1292832-91-179004723556608/AnsiballZ_setup.py'
Jan 22 13:30:07 compute-2 sudo[68743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:08 compute-2 python3.9[68745]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:08 compute-2 sudo[68743]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:08 compute-2 sudo[68895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncdutyebzatmqjmfxaprouhnwfzcsdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088608.571374-116-246423430427443/AnsiballZ_blockinfile.py'
Jan 22 13:30:08 compute-2 sudo[68895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:09 compute-2 python3.9[68897]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=
                                             create=True mode=0644 path=/tmp/ansible.3fn2oeoe state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:30:09 compute-2 sudo[68895]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:09 compute-2 sudo[69047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzxuucvcimysddlczpkxldhqdkavtmat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088609.3912275-141-266301351767296/AnsiballZ_command.py'
Jan 22 13:30:09 compute-2 sudo[69047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:10 compute-2 python3.9[69049]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3fn2oeoe' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:10 compute-2 sudo[69047]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:10 compute-2 sudo[69201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkpatjpbznstfaigbksowlbbskqpzibs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088610.2749946-164-213545588539263/AnsiballZ_file.py'
Jan 22 13:30:10 compute-2 sudo[69201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:10 compute-2 python3.9[69203]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3fn2oeoe state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:30:10 compute-2 sudo[69201]: pam_unix(sudo:session): session closed for user root
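
Session 16 rebuilds the system-wide known_hosts in three guarded steps: a root-owned tempfile, a blockinfile that writes one "<host>,<ip>,<pattern> <key-type> <key>" line per host key collected through the ssh_host_key_* fact subsets, and a shell redirect that copies the result over /etc/ssh/ssh_known_hosts before removing the temp file. In outline:

    tmp=$(mktemp /tmp/ansible.XXXXXX)
    # blockinfile fills $tmp with the rsa/ed25519/ecdsa keys of compute-0..2
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"
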
Jan 22 13:30:11 compute-2 sshd-session[68286]: Connection closed by 192.168.122.30 port 41918
Jan 22 13:30:11 compute-2 sshd-session[68283]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:30:11 compute-2 systemd[1]: session-16.scope: Deactivated successfully.
Jan 22 13:30:11 compute-2 systemd[1]: session-16.scope: Consumed 3.552s CPU time.
Jan 22 13:30:11 compute-2 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Jan 22 13:30:11 compute-2 systemd-logind[787]: Removed session 16.
Jan 22 13:30:17 compute-2 sshd-session[69228]: Accepted publickey for zuul from 192.168.122.30 port 52730 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:30:17 compute-2 systemd-logind[787]: New session 17 of user zuul.
Jan 22 13:30:17 compute-2 systemd[1]: Started Session 17 of User zuul.
Jan 22 13:30:17 compute-2 sshd-session[69228]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:18 compute-2 python3.9[69381]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:19 compute-2 sudo[69535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyksmrkzwkraggdhhifskkgcjrkrcqbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088618.8036575-59-167298750327860/AnsiballZ_systemd.py'
Jan 22 13:30:19 compute-2 sudo[69535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:20 compute-2 python3.9[69537]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 13:30:20 compute-2 sudo[69535]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:20 compute-2 sudo[69689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjrjlkxrzaqzbesygapgdsaggchgoffh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088620.2664318-82-130541106372581/AnsiballZ_systemd.py'
Jan 22 13:30:20 compute-2 sudo[69689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:20 compute-2 python3.9[69691]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:30:20 compute-2 sudo[69689]: pam_unix(sudo:session): session closed for user root
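
Enablement and runtime state for sshd are handled as two separate systemd tasks (enabled=True with no state, then state=started with no enabled), the module-level equivalent of:

    systemctl enable sshd
    systemctl start sshd
    # or, in one step: systemctl enable --now sshd
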
Jan 22 13:30:21 compute-2 sudo[69842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkjzhnwlnifzkoxwkptjsqjrtjbhxwfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088621.2421777-109-50651260449884/AnsiballZ_command.py'
Jan 22 13:30:21 compute-2 sudo[69842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:21 compute-2 python3.9[69844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:21 compute-2 sudo[69842]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:22 compute-2 sudo[69995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcwudrhuxuhchpglnhouachriiomyjkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088622.100103-133-120944653729370/AnsiballZ_stat.py'
Jan 22 13:30:22 compute-2 sudo[69995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:22 compute-2 python3.9[69997]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:22 compute-2 sudo[69995]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:23 compute-2 sudo[70149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiyodipizwglwznjavhfpxuhmsjmkcaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088622.9867618-157-139596504140803/AnsiballZ_command.py'
Jan 22 13:30:23 compute-2 sudo[70149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:23 compute-2 python3.9[70151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:23 compute-2 sudo[70149]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:24 compute-2 sudo[70304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcmiplvlnliyhrbhwodqmfxxzcbxbldb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088623.7054226-181-220115354709579/AnsiballZ_file.py'
Jan 22 13:30:24 compute-2 sudo[70304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:24 compute-2 python3.9[70306]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:30:24 compute-2 sudo[70304]: pam_unix(sudo:session): session closed for user root
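
This session applies the firewall for real: edpm-chains.nft is loaded unconditionally (templated from chains.j2, so presumably just table and chain definitions), then the stat on edpm-rules.nft.changed -- the marker touched when the rules were regenerated earlier -- gates the flush-and-reload of the rule set, and the marker is removed afterwards so an unchanged ruleset is not replayed on the next run. The guarded reload, by hand:

    nft -f /etc/nftables/edpm-chains.nft
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi
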
Jan 22 13:30:24 compute-2 sshd-session[69231]: Connection closed by 192.168.122.30 port 52730
Jan 22 13:30:24 compute-2 sshd-session[69228]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:30:24 compute-2 systemd[1]: session-17.scope: Deactivated successfully.
Jan 22 13:30:24 compute-2 systemd[1]: session-17.scope: Consumed 4.176s CPU time.
Jan 22 13:30:24 compute-2 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Jan 22 13:30:24 compute-2 systemd-logind[787]: Removed session 17.
Jan 22 13:30:30 compute-2 sshd-session[70331]: Accepted publickey for zuul from 192.168.122.30 port 44074 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:30:30 compute-2 systemd-logind[787]: New session 18 of user zuul.
Jan 22 13:30:30 compute-2 systemd[1]: Started Session 18 of User zuul.
Jan 22 13:30:30 compute-2 sshd-session[70331]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:31 compute-2 python3.9[70484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:32 compute-2 sudo[70638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhsuscvdebtuuffpqoqadblnskbxrhcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088632.0678976-64-74556308432818/AnsiballZ_setup.py'
Jan 22 13:30:32 compute-2 sudo[70638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:32 compute-2 python3.9[70640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:30:33 compute-2 sudo[70638]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:33 compute-2 sudo[70722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzchcpuwnhkagnhoduqqxpwkzfjfaqzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769088632.0678976-64-74556308432818/AnsiballZ_dnf.py'
Jan 22 13:30:33 compute-2 sudo[70722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:33 compute-2 python3.9[70724]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:30:34 compute-2 sudo[70722]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:36 compute-2 python3.9[70875]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
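
yum-utils is installed here only to provide needs-restarting; with -r it checks whether a reboot is required (updated kernel, glibc, systemd and the like) and reports through its exit code, which is what the playbook branches on:

    needs-restarting -r; echo "exit=$?"
    # 0: reboot not required, 1: reboot required
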
Jan 22 13:30:37 compute-2 python3.9[71026]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:30:38 compute-2 python3.9[71176]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:38 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:30:39 compute-2 python3.9[71327]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:30:39 compute-2 sshd-session[70334]: Connection closed by 192.168.122.30 port 44074
Jan 22 13:30:40 compute-2 sshd-session[70331]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:30:40 compute-2 systemd[1]: session-18.scope: Deactivated successfully.
Jan 22 13:30:40 compute-2 systemd[1]: session-18.scope: Consumed 6.169s CPU time.
Jan 22 13:30:40 compute-2 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Jan 22 13:30:40 compute-2 systemd-logind[787]: Removed session 18.
Jan 22 13:30:44 compute-2 chronyd[58561]: Selected source 167.160.187.179 (pool.ntp.org)
Jan 22 13:30:48 compute-2 sshd-session[71352]: Accepted publickey for zuul from 38.102.83.41 port 45612 ssh2: RSA SHA256:TuAhGULDfe9nJAKjmqaszwyLr0Lzzf2znQ+Nnm8F8LU
Jan 22 13:30:48 compute-2 systemd-logind[787]: New session 19 of user zuul.
Jan 22 13:30:48 compute-2 systemd[1]: Started Session 19 of User zuul.
Jan 22 13:30:48 compute-2 sshd-session[71352]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:30:48 compute-2 sudo[71428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtteelsyyoffywdweldftzflvnmilriz ; /usr/bin/python3'
Jan 22 13:30:48 compute-2 sudo[71428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:49 compute-2 useradd[71432]: new group: name=ceph-admin, GID=42478
Jan 22 13:30:49 compute-2 useradd[71432]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 22 13:30:49 compute-2 sudo[71428]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:50 compute-2 sudo[71514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puugovgwvwjztjicfgsazauqomlfbwsp ; /usr/bin/python3'
Jan 22 13:30:50 compute-2 sudo[71514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:50 compute-2 sudo[71514]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:50 compute-2 sudo[71587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dllkgxesanhnlwlssjqfrvxktkeqjsru ; /usr/bin/python3'
Jan 22 13:30:50 compute-2 sudo[71587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:50 compute-2 sudo[71587]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:51 compute-2 sudo[71637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euvktiviftjebpcagwtixgwwpqsgxluz ; /usr/bin/python3'
Jan 22 13:30:51 compute-2 sudo[71637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:51 compute-2 sudo[71637]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:51 compute-2 sudo[71663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfdvosqixjcdaknmkkyjyabwirziqazx ; /usr/bin/python3'
Jan 22 13:30:51 compute-2 sudo[71663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:51 compute-2 sudo[71663]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:52 compute-2 sudo[71689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngmofbfyshxlpwpjvaqbtsmpwfntwavy ; /usr/bin/python3'
Jan 22 13:30:52 compute-2 sudo[71689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:52 compute-2 sudo[71689]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:52 compute-2 sudo[71715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giclmznmckazurjurwfddplgaitxptbn ; /usr/bin/python3'
Jan 22 13:30:52 compute-2 sudo[71715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:52 compute-2 sudo[71715]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:53 compute-2 sudo[71793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdtybibgmujjirfwucezfiqshqgthyja ; /usr/bin/python3'
Jan 22 13:30:53 compute-2 sudo[71793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:53 compute-2 sudo[71793]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:53 compute-2 sudo[71866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxcvcgqbrwzitydptlnwbtwypwwcxqsc ; /usr/bin/python3'
Jan 22 13:30:53 compute-2 sudo[71866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:53 compute-2 sudo[71866]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:54 compute-2 sudo[71968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngfkpbxaxdhoyhzspxxchpxwufhqnzhy ; /usr/bin/python3'
Jan 22 13:30:54 compute-2 sudo[71968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:54 compute-2 sudo[71968]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:54 compute-2 sudo[72041]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtotphgrpbedplfwaespacaupknbgweo ; /usr/bin/python3'
Jan 22 13:30:54 compute-2 sudo[72041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:54 compute-2 sudo[72041]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:55 compute-2 sudo[72091]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdjhfpophwgwduwrgnnnecvegwqrpuyh ; /usr/bin/python3'
Jan 22 13:30:55 compute-2 sudo[72091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:55 compute-2 python3[72093]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:30:56 compute-2 sudo[72091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:57 compute-2 sudo[72186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyasxakgdcgzekomuhslvpctlkbwdekj ; /usr/bin/python3'
Jan 22 13:30:57 compute-2 sudo[72186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:57 compute-2 python3[72188]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 13:30:58 compute-2 sudo[72186]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:58 compute-2 sudo[72213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymiobrkfmshbwxoatejjozsszahyhoa ; /usr/bin/python3'
Jan 22 13:30:58 compute-2 sudo[72213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:59 compute-2 python3[72215]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 13:30:59 compute-2 sudo[72213]: pam_unix(sudo:session): session closed for user root
Jan 22 13:30:59 compute-2 sudo[72239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifwhmesfhufcfbfoenlstuexfjyvhiwa ; /usr/bin/python3'
Jan 22 13:30:59 compute-2 sudo[72239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:30:59 compute-2 python3[72241]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:30:59 compute-2 kernel: loop: module loaded
Jan 22 13:30:59 compute-2 kernel: loop3: detected capacity change from 0 to 14680064
Jan 22 13:30:59 compute-2 sudo[72239]: pam_unix(sudo:session): session closed for user root
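
The dd call with count=0 seek=7G writes no data at all; it merely truncates /var/lib/ceph-osd-0.img to a sparse 7 GiB file, which losetup then exposes as /dev/loop3. The kernel lines above confirm the size: 14680064 x 512-byte sectors = 7 GiB. Equivalent:

    truncate -s 7G /var/lib/ceph-osd-0.img   # same effect as the dd above
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk /dev/loop3
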
Jan 22 13:30:59 compute-2 sudo[72273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxwklyxdzycevgkyhnwyzuhtaytzfttk ; /usr/bin/python3'
Jan 22 13:30:59 compute-2 sudo[72273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:00 compute-2 python3[72275]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:31:00 compute-2 lvm[72278]: PV /dev/loop3 not used.
Jan 22 13:31:00 compute-2 lvm[72280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:31:00 compute-2 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 22 13:31:00 compute-2 lvm[72290]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:31:00 compute-2 lvm[72290]: VG ceph_vg0 finished
Jan 22 13:31:00 compute-2 lvm[72287]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 22 13:31:00 compute-2 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 22 13:31:00 compute-2 sudo[72273]: pam_unix(sudo:session): session closed for user root
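
On top of the loop device, pvcreate/vgcreate/lvcreate build a one-PV volume group whose single LV claims every free extent (-l +100%FREE), leaving just under 7 GiB once VG metadata is set aside; the interleaved lvm/systemd lines are the event-driven autoactivation of the new VG. The device the Ceph tooling will consume:

    ls -l /dev/ceph_vg0/ceph_lv0 /dev/mapper/ceph_vg0-ceph_lv0
    lvs --units g ceph_vg0
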
Jan 22 13:31:00 compute-2 sudo[72366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzdppwvrjczfhnlyhgvswjmgvhzyetbl ; /usr/bin/python3'
Jan 22 13:31:00 compute-2 sudo[72366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:00 compute-2 python3[72368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 13:31:00 compute-2 sudo[72366]: pam_unix(sudo:session): session closed for user root
Jan 22 13:31:01 compute-2 sudo[72439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cduxxetkkohbyivgdiivvinnhsmhidvl ; /usr/bin/python3'
Jan 22 13:31:01 compute-2 sudo[72439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:01 compute-2 python3[72441]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088660.6088269-37031-193881875744097/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:31:01 compute-2 sudo[72439]: pam_unix(sudo:session): session closed for user root
Jan 22 13:31:01 compute-2 sudo[72489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-futwijvckahoindmfybuurtqshkwleqb ; /usr/bin/python3'
Jan 22 13:31:01 compute-2 sudo[72489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:31:02 compute-2 python3[72491]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:31:02 compute-2 systemd[1]: Reloading.
Jan 22 13:31:02 compute-2 systemd-rc-local-generator[72517]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:31:02 compute-2 systemd-sysv-generator[72522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:31:02 compute-2 systemd[1]: Starting Ceph OSD losetup...
Jan 22 13:31:02 compute-2 bash[72531]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 22 13:31:02 compute-2 systemd[1]: Finished Ceph OSD losetup.
Jan 22 13:31:02 compute-2 lvm[72532]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:31:02 compute-2 lvm[72532]: VG ceph_vg0 finished
Jan 22 13:31:02 compute-2 sudo[72489]: pam_unix(sudo:session): session closed for user root
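
Loop mappings do not survive a reboot, so a small oneshot unit (templated from ceph-osd-losetup.service.j2) re-attaches the image at boot; the "Starting Ceph OSD losetup..." and bash[72531] lines above are its first start, with losetup reporting /dev/loop3 already bound to the image's inode. The unit body is not in the log; a plausible reconstruction, every line of it an assumption:

    cat <<'EOF' > /etc/systemd/system/ceph-osd-losetup-0.service
    # hypothetical reconstruction of the rendered ceph-osd-losetup.service.j2
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/bash -c '/usr/sbin/losetup /dev/loop3 || /usr/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
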
Jan 22 13:31:04 compute-2 python3[72556]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:31:16 compute-2 sshd-session[72600]: Invalid user sol from 45.148.10.240 port 35200
Jan 22 13:31:16 compute-2 sshd-session[72600]: Connection closed by invalid user sol 45.148.10.240 port 35200 [preauth]
Jan 22 13:31:52 compute-2 sshd-session[72603]: Connection closed by 92.118.39.95 port 51048
Jan 22 13:33:33 compute-2 sshd-session[72606]: Accepted publickey for ceph-admin from 192.168.122.100 port 43866 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:33 compute-2 systemd-logind[787]: New session 20 of user ceph-admin.
Jan 22 13:33:33 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 13:33:33 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 13:33:33 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 13:33:33 compute-2 systemd[1]: Starting User Manager for UID 42477...
Jan 22 13:33:33 compute-2 systemd[72610]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:33 compute-2 systemd[72610]: Queued start job for default target Main User Target.
Jan 22 13:33:33 compute-2 systemd[72610]: Created slice User Application Slice.
Jan 22 13:33:33 compute-2 systemd[72610]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 13:33:33 compute-2 systemd[72610]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 13:33:33 compute-2 systemd[72610]: Reached target Paths.
Jan 22 13:33:33 compute-2 systemd[72610]: Reached target Timers.
Jan 22 13:33:33 compute-2 systemd[72610]: Starting D-Bus User Message Bus Socket...
Jan 22 13:33:33 compute-2 systemd[72610]: Starting Create User's Volatile Files and Directories...
Jan 22 13:33:33 compute-2 systemd[72610]: Listening on D-Bus User Message Bus Socket.
Jan 22 13:33:33 compute-2 systemd[72610]: Finished Create User's Volatile Files and Directories.
Jan 22 13:33:33 compute-2 systemd[72610]: Reached target Sockets.
Jan 22 13:33:33 compute-2 systemd[72610]: Reached target Basic System.
Jan 22 13:33:33 compute-2 systemd[72610]: Reached target Main User Target.
Jan 22 13:33:33 compute-2 systemd[72610]: Startup finished in 111ms.
Jan 22 13:33:33 compute-2 systemd[1]: Started User Manager for UID 42477.
Jan 22 13:33:33 compute-2 sshd-session[72623]: Accepted publickey for ceph-admin from 192.168.122.100 port 43868 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:33 compute-2 systemd[1]: Started Session 20 of User ceph-admin.
Jan 22 13:33:33 compute-2 sshd-session[72604]: Invalid user sol from 45.148.10.240 port 49312
Jan 22 13:33:33 compute-2 sshd-session[72606]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:33 compute-2 systemd-logind[787]: New session 22 of user ceph-admin.
Jan 22 13:33:33 compute-2 systemd[1]: Started Session 22 of User ceph-admin.
Jan 22 13:33:33 compute-2 sshd-session[72623]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:33 compute-2 sshd-session[72604]: Connection closed by invalid user sol 45.148.10.240 port 49312 [preauth]
Jan 22 13:33:33 compute-2 sudo[72630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:33 compute-2 sudo[72630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-2 sudo[72630]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-2 sudo[72655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:33:33 compute-2 sudo[72655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-2 sudo[72655]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-2 sshd-session[72680]: Accepted publickey for ceph-admin from 192.168.122.100 port 43878 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:33 compute-2 systemd-logind[787]: New session 23 of user ceph-admin.
Jan 22 13:33:33 compute-2 systemd[1]: Started Session 23 of User ceph-admin.
Jan 22 13:33:33 compute-2 sshd-session[72680]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:33 compute-2 sudo[72684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:33 compute-2 sudo[72684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-2 sudo[72684]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:33 compute-2 sudo[72709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-2
Jan 22 13:33:33 compute-2 sudo[72709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:33 compute-2 sudo[72709]: pam_unix(sudo:session): session closed for user root
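Annotation: the first real cephadm invocation is `check-host --expect-hostname compute-2`, which verifies that the node the orchestrator reached over SSH is the node it intended to enroll. A minimal sketch of that idea, assuming a short-hostname comparison (the exact check cephadm performs is not visible in this log):

import socket

def check_expected_hostname(expected: str) -> None:
    # Conceptual equivalent of `cephadm check-host --expect-hostname`;
    # comparing short names is an assumption for illustration.
    actual = socket.gethostname().split(".")[0]
    if actual != expected.split(".")[0]:
        raise RuntimeError(
            f"hostname mismatch: have {actual!r}, expected {expected!r}"
        )

check_expected_hostname("compute-2")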
Jan 22 13:33:34 compute-2 sshd-session[72734]: Accepted publickey for ceph-admin from 192.168.122.100 port 43886 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:34 compute-2 systemd-logind[787]: New session 24 of user ceph-admin.
Jan 22 13:33:34 compute-2 systemd[1]: Started Session 24 of User ceph-admin.
Jan 22 13:33:34 compute-2 sshd-session[72734]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:34 compute-2 sudo[72738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:34 compute-2 sudo[72738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-2 sudo[72738]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-2 sudo[72763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 22 13:33:34 compute-2 sudo[72763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-2 sudo[72763]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-2 sshd-session[72788]: Accepted publickey for ceph-admin from 192.168.122.100 port 43900 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:34 compute-2 systemd-logind[787]: New session 25 of user ceph-admin.
Jan 22 13:33:34 compute-2 systemd[1]: Started Session 25 of User ceph-admin.
Jan 22 13:33:34 compute-2 sshd-session[72788]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:34 compute-2 sudo[72792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:34 compute-2 sudo[72792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-2 sudo[72792]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-2 sudo[72817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:33:34 compute-2 sudo[72817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-2 sudo[72817]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-2 sshd-session[72842]: Accepted publickey for ceph-admin from 192.168.122.100 port 43916 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:34 compute-2 systemd-logind[787]: New session 26 of user ceph-admin.
Jan 22 13:33:34 compute-2 systemd[1]: Started Session 26 of User ceph-admin.
Jan 22 13:33:34 compute-2 sshd-session[72842]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:34 compute-2 sudo[72846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:34 compute-2 sudo[72846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-2 sudo[72846]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:34 compute-2 sudo[72871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:33:34 compute-2 sudo[72871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:34 compute-2 sudo[72871]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-2 sshd-session[72896]: Accepted publickey for ceph-admin from 192.168.122.100 port 43932 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:35 compute-2 systemd-logind[787]: New session 27 of user ceph-admin.
Jan 22 13:33:35 compute-2 systemd[1]: Started Session 27 of User ceph-admin.
Jan 22 13:33:35 compute-2 sshd-session[72896]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:35 compute-2 sudo[72900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:35 compute-2 sudo[72900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-2 sudo[72900]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-2 sudo[72925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 22 13:33:35 compute-2 sudo[72925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-2 sudo[72925]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-2 sshd-session[72950]: Accepted publickey for ceph-admin from 192.168.122.100 port 43934 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:35 compute-2 systemd-logind[787]: New session 28 of user ceph-admin.
Jan 22 13:33:35 compute-2 systemd[1]: Started Session 28 of User ceph-admin.
Jan 22 13:33:35 compute-2 sshd-session[72950]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:35 compute-2 sudo[72954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:35 compute-2 sudo[72954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-2 sudo[72954]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:35 compute-2 sudo[72979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:33:35 compute-2 sudo[72979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:35 compute-2 sudo[72979]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:36 compute-2 sshd-session[73004]: Accepted publickey for ceph-admin from 192.168.122.100 port 43946 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:36 compute-2 systemd-logind[787]: New session 29 of user ceph-admin.
Jan 22 13:33:36 compute-2 systemd[1]: Started Session 29 of User ceph-admin.
Jan 22 13:33:36 compute-2 sshd-session[73004]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:36 compute-2 sudo[73008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:36 compute-2 sudo[73008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:36 compute-2 sudo[73008]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:36 compute-2 sudo[73033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 22 13:33:36 compute-2 sudo[73033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:36 compute-2 sudo[73033]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:36 compute-2 sshd-session[73058]: Accepted publickey for ceph-admin from 192.168.122.100 port 43954 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:36 compute-2 systemd-logind[787]: New session 30 of user ceph-admin.
Jan 22 13:33:36 compute-2 systemd[1]: Started Session 30 of User ceph-admin.
Jan 22 13:33:36 compute-2 sshd-session[73058]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:37 compute-2 sshd-session[73085]: Accepted publickey for ceph-admin from 192.168.122.100 port 43968 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:37 compute-2 systemd-logind[787]: New session 31 of user ceph-admin.
Jan 22 13:33:37 compute-2 systemd[1]: Started Session 31 of User ceph-admin.
Jan 22 13:33:37 compute-2 sshd-session[73085]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:37 compute-2 sudo[73089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:37 compute-2 sudo[73089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:37 compute-2 sudo[73089]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:37 compute-2 sudo[73114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 22 13:33:37 compute-2 sudo[73114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:37 compute-2 sudo[73114]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:37 compute-2 sshd-session[73139]: Accepted publickey for ceph-admin from 192.168.122.100 port 43972 ssh2: RSA SHA256:BUfpvrJ7dTHhz9/llaOCxKzyoNvclvQPLoh5j4/yedI
Jan 22 13:33:37 compute-2 systemd-logind[787]: New session 32 of user ceph-admin.
Jan 22 13:33:37 compute-2 systemd[1]: Started Session 32 of User ceph-admin.
Jan 22 13:33:37 compute-2 sshd-session[73139]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 22 13:33:37 compute-2 sudo[73143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:33:37 compute-2 sudo[73143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:37 compute-2 sudo[73143]: pam_unix(sudo:session): session closed for user root
Jan 22 13:33:37 compute-2 sudo[73168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-2
Jan 22 13:33:37 compute-2 sudo[73168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:33:37 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:33:38 compute-2 sudo[73168]: pam_unix(sudo:session): session closed for user root
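Annotation: the sequence above pushes the cephadm binary itself onto the host: `ls` checks for an existing copy, a staging tree is created under /tmp/cephadm-<fsid>, the payload is written as a `.new` file, and `mv` puts it at /var/lib/ceph/<fsid>/cephadm.<64-hex-digits> before check-host is re-run with the fresh copy. The 64-hex suffix is presumably the SHA-256 of the script contents (an assumption; the naming strongly suggests content addressing, which lets the manager detect a stale or corrupted copy with a single `ls`). A sketch of verifying such a name:

import hashlib
import os

def digest_matches_name(path: str) -> bool:
    # Assumption: the hex suffix after the final '.' in the filename
    # is the SHA-256 of the file body, as the cephadm.<digest> naming
    # in this log suggests.
    expected = os.path.basename(path).rsplit(".", 1)[-1]
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected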
Jan 22 13:34:36 compute-2 sudo[73213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:36 compute-2 sudo[73213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-2 sudo[73213]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-2 sudo[73238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:34:36 compute-2 sudo[73238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-2 sudo[73238]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-2 sudo[73263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:36 compute-2 sudo[73263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-2 sudo[73263]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-2 sudo[73288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:36 compute-2 sudo[73288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-2 sudo[73288]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-2 sudo[73313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:36 compute-2 sudo[73313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-2 sudo[73313]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:36 compute-2 sudo[73338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 13:34:36 compute-2 sudo[73338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:36 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:36 compute-2 sudo[73338]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:37 compute-2 sudo[73383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 sudo[73383]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:37 compute-2 sudo[73408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 sudo[73408]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:37 compute-2 sudo[73433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 sudo[73433]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:34:37 compute-2 sudo[73458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:37 compute-2 sudo[73458]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:37 compute-2 sudo[73521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 sudo[73521]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:37 compute-2 sudo[73546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 sudo[73546]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:37 compute-2 sudo[73571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 sudo[73571]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:37 compute-2 sudo[73596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:34:37 compute-2 sudo[73596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:37 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:37 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:37 compute-2 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73633 (sysctl)
Jan 22 13:34:38 compute-2 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 22 13:34:38 compute-2 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 22 13:34:38 compute-2 sudo[73596]: pam_unix(sudo:session): session closed for user root
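Annotation: the binfmt_misc automount above fired because `gather-facts` ran sysctl (pid 73633), and walking /proc/sys touches /proc/sys/fs/binfmt_misc, which systemd mounts on first access. Sysctl keys map directly onto /proc/sys paths, as in this sketch:

def read_sysctl(name: str) -> str:
    # sysctl keys become file paths with dots replaced by slashes;
    # merely traversing /proc/sys/fs/binfmt_misc is what triggered
    # the systemd automount logged above.
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as fh:
        return fh.read().strip()

print(read_sysctl("kernel.hostname"))  # e.g. "compute-2"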
Jan 22 13:34:38 compute-2 sudo[73655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:38 compute-2 sudo[73655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-2 sudo[73655]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-2 sudo[73680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:38 compute-2 sudo[73680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-2 sudo[73680]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-2 sudo[73705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:38 compute-2 sudo[73705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-2 sudo[73705]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:38 compute-2 sudo[73730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 13:34:38 compute-2 sudo[73730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:38 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:38 compute-2 sudo[73730]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:39 compute-2 sudo[73773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:39 compute-2 sudo[73773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:39 compute-2 sudo[73773]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:39 compute-2 sudo[73798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:34:39 compute-2 sudo[73798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:39 compute-2 sudo[73798]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:39 compute-2 sudo[73823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:34:39 compute-2 sudo[73823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:39 compute-2 sudo[73823]: pam_unix(sudo:session): session closed for user root
Jan 22 13:34:39 compute-2 sudo[73848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 13:34:39 compute-2 sudo[73848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:34:39 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:34:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat1386398941-lower\x2dmapped.mount: Deactivated successfully.
Jan 22 13:35:16 compute-2 podman[73910]: 2026-01-22 13:35:16.419786263 +0000 UTC m=+36.911080043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:16 compute-2 podman[73910]: 2026-01-22 13:35:16.802841286 +0000 UTC m=+37.294135046 container create 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 13:35:17 compute-2 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 22 13:35:17 compute-2 systemd[1]: Started libpod-conmon-858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4.scope.
Jan 22 13:35:17 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:17 compute-2 podman[73910]: 2026-01-22 13:35:17.775524378 +0000 UTC m=+38.266818168 container init 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 13:35:17 compute-2 podman[73910]: 2026-01-22 13:35:17.783054284 +0000 UTC m=+38.274348084 container start 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:35:17 compute-2 gallant_kalam[73980]: 167 167
Jan 22 13:35:17 compute-2 systemd[1]: libpod-858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4.scope: Deactivated successfully.
Jan 22 13:35:17 compute-2 podman[73910]: 2026-01-22 13:35:17.948896878 +0000 UTC m=+38.440190668 container attach 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 22 13:35:17 compute-2 podman[73910]: 2026-01-22 13:35:17.949568745 +0000 UTC m=+38.440862525 container died 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 13:35:18 compute-2 systemd[1]: var-lib-containers-storage-overlay-3f5a7ef4872511dbce92abc0bd3d0bd2f6a1fed938990b49cced862a76caf8d8-merged.mount: Deactivated successfully.
Jan 22 13:35:18 compute-2 podman[73910]: 2026-01-22 13:35:18.710811987 +0000 UTC m=+39.202105757 container remove 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 13:35:18 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:18 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:18 compute-2 systemd[1]: libpod-conmon-858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4.scope: Deactivated successfully.
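Annotation: the create → init → start → attach → died → remove sequence, wrapped in a transient libpod-conmon scope, is the journal signature of a one-shot `podman run --rm`. This first container (gallant_kalam) only printed "167 167", apparently probing the ceph uid/gid baked into the image (an inference; the probed path is not shown). A sketch of driving such a one-shot run, with the image digest taken from the log and the entrypoint arguments left as illustrative assumptions:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def one_shot(args: list[str]) -> str:
    # --rm yields exactly the create/start/died/remove lifecycle seen
    # above; the real cephadm invocation adds many mounts and flags
    # not reproduced here.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, *args],
        check=True, capture_output=True, text=True,
    )
    return out.stdout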
Jan 22 13:35:18 compute-2 podman[74003]: 2026-01-22 13:35:18.856802494 +0000 UTC m=+0.042427084 container create 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:35:18 compute-2 systemd[1]: Started libpod-conmon-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope.
Jan 22 13:35:18 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48db2295397123c3951b3f86cc289f28156c04b273da95798f8c6f01aaf697e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48db2295397123c3951b3f86cc289f28156c04b273da95798f8c6f01aaf697e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:18 compute-2 podman[74003]: 2026-01-22 13:35:18.834581436 +0000 UTC m=+0.020206046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:18 compute-2 podman[74003]: 2026-01-22 13:35:18.93505505 +0000 UTC m=+0.120679670 container init 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 13:35:18 compute-2 podman[74003]: 2026-01-22 13:35:18.941403785 +0000 UTC m=+0.127028375 container start 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 13:35:18 compute-2 podman[74003]: 2026-01-22 13:35:18.945990344 +0000 UTC m=+0.131614964 container attach 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:20 compute-2 charming_davinci[74019]: [
Jan 22 13:35:20 compute-2 charming_davinci[74019]:     {
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "available": false,
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "ceph_device": false,
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "lsm_data": {},
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "lvs": [],
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "path": "/dev/sr0",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "rejected_reasons": [
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "Insufficient space (<5GB)",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "Has a FileSystem"
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         ],
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         "sys_api": {
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "actuators": null,
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "device_nodes": "sr0",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "devname": "sr0",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "human_readable_size": "482.00 KB",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "id_bus": "ata",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "model": "QEMU DVD-ROM",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "nr_requests": "2",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "parent": "/dev/sr0",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "partitions": {},
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "path": "/dev/sr0",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "removable": "1",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "rev": "2.5+",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "ro": "0",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "rotational": "1",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "sas_address": "",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "sas_device_handle": "",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "scheduler_mode": "mq-deadline",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "sectors": 0,
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "sectorsize": "2048",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "size": 493568.0,
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "support_discard": "2048",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "type": "disk",
Jan 22 13:35:20 compute-2 charming_davinci[74019]:             "vendor": "QEMU"
Jan 22 13:35:20 compute-2 charming_davinci[74019]:         }
Jan 22 13:35:20 compute-2 charming_davinci[74019]:     }
Jan 22 13:35:20 compute-2 charming_davinci[74019]: ]
Jan 22 13:35:20 compute-2 systemd[1]: libpod-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope: Deactivated successfully.
Jan 22 13:35:20 compute-2 systemd[1]: libpod-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope: Consumed 1.131s CPU time.
Jan 22 13:35:20 compute-2 podman[74003]: 2026-01-22 13:35:20.071853791 +0000 UTC m=+1.257478391 container died 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 13:35:20 compute-2 systemd[1]: var-lib-containers-storage-overlay-a48db2295397123c3951b3f86cc289f28156c04b273da95798f8c6f01aaf697e-merged.mount: Deactivated successfully.
Jan 22 13:35:20 compute-2 podman[74003]: 2026-01-22 13:35:20.448441216 +0000 UTC m=+1.634065806 container remove 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 13:35:20 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:20 compute-2 systemd[1]: libpod-conmon-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope: Deactivated successfully.
Jan 22 13:35:20 compute-2 sudo[73848]: pam_unix(sudo:session): session closed for user root
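Annotation: the second container (charming_davinci) ran `ceph-volume inventory --format=json-pretty --filter-for-batch` and printed the JSON list above: compute-2's only candidate device, /dev/sr0 (the QEMU DVD-ROM), is marked "available": false with rejected_reasons "Insufficient space (<5GB)" and "Has a FileSystem", so the host offers no OSD-eligible disks. A sketch of consuming that output, keyed to the fields actually present in the log:

import json

def summarize(inventory_json: str) -> list[str]:
    """Print a verdict per device and return the usable paths."""
    usable = []
    for dev in json.loads(inventory_json):
        if dev.get("available"):
            usable.append(dev["path"])
            print(f"{dev['path']}: usable for an OSD")
        else:
            reasons = ", ".join(dev.get("rejected_reasons", []))
            print(f"{dev['path']}: rejected ({reasons})")
    return usable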
Jan 22 13:35:20 compute-2 sudo[74897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:20 compute-2 sudo[74897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:20 compute-2 sudo[74897]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[74922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 22 13:35:21 compute-2 sudo[74922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[74922]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[74947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:21 compute-2 sudo[74947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[74947]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[74972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph
Jan 22 13:35:21 compute-2 sudo[74972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[74972]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[74997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:21 compute-2 sudo[74997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[74997]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:35:21 compute-2 sudo[75022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75022]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:21 compute-2 sudo[75047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75047]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:21 compute-2 sudo[75072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75072]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:21 compute-2 sudo[75097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75097]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:35:21 compute-2 sudo[75122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75122]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:21 compute-2 sudo[75170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75170]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:35:21 compute-2 sudo[75195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75195]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:21 compute-2 sudo[75220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75220]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:21 compute-2 sudo[75245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:35:21 compute-2 sudo[75245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:21 compute-2 sudo[75245]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75270]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 22 13:35:22 compute-2 sudo[75295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75295]: pam_unix(sudo:session): session closed for user root
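Annotation: ceph.conf reaches /etc/ceph through the same staging dance used for the cephadm binary: root touches the empty `.new` file, `chown -R ceph-admin` hands the /tmp tree to the unprivileged SSH user so the payload can be written without root, then root restores 0:0 ownership, fixes the mode to 644, and `mv` drops it into place so readers never observe a half-written config. One caveat worth noting: a rename is atomic only within a single filesystem, and /tmp to /etc may cross devices, in which case `mv` falls back to copy-then-unlink. A sketch of the final step:

import os

def install(tmp_path: str, final_path: str, mode: int) -> None:
    # Mirrors the chmod + mv at the end of the sequence above.
    # os.replace is atomic on one filesystem but raises OSError
    # (EXDEV) across filesystems, where `mv` instead degrades to
    # copy-then-unlink.
    os.chmod(tmp_path, mode)
    os.replace(tmp_path, final_path)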
Jan 22 13:35:22 compute-2 sudo[75320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75320]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:35:22 compute-2 sudo[75345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75345]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75370]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:35:22 compute-2 sudo[75395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75395]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75420]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:35:22 compute-2 sudo[75445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75445]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75470]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:22 compute-2 sudo[75495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75495]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75520]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:35:22 compute-2 sudo[75545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75545]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75593]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:35:22 compute-2 sudo[75618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75618]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:22 compute-2 sudo[75643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75643]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:22 compute-2 sudo[75668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:35:22 compute-2 sudo[75668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:22 compute-2 sudo[75668]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:23 compute-2 sudo[75693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75693]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:35:23 compute-2 sudo[75718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75718]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:23 compute-2 sudo[75743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75743]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 22 13:35:23 compute-2 sudo[75768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75768]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:23 compute-2 sudo[75793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75793]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph
Jan 22 13:35:23 compute-2 sudo[75818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75818]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:23 compute-2 sudo[75843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75843]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:35:23 compute-2 sudo[75868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75868]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:23 compute-2 sudo[75893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75893]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:23 compute-2 sudo[75918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75918]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:23 compute-2 sudo[75943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75943]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[75968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:35:23 compute-2 sudo[75968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[75968]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[76016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:23 compute-2 sudo[76016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[76016]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:23 compute-2 sudo[76041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:35:23 compute-2 sudo[76041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:23 compute-2 sudo[76041]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76066]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new
Jan 22 13:35:24 compute-2 sudo[76091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76116]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 22 13:35:24 compute-2 sudo[76141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76141]: pam_unix(sudo:session): session closed for user root
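Annotation: note the mode difference in the two distributions: ceph.conf was installed world-readable (644), while ceph.client.admin.keyring, which carries cluster-admin credentials, is tightened to 600 before the move. A quick permission check matching the paths in this log:

import os
import stat

def world_readable(path: str) -> bool:
    return bool(os.stat(path).st_mode & stat.S_IROTH)

# Expected after the steps above:
#   /etc/ceph/ceph.conf                  -> True  (mode 644)
#   /etc/ceph/ceph.client.admin.keyring  -> False (mode 600)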
Jan 22 13:35:24 compute-2 sudo[76166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76166]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:35:24 compute-2 sudo[76191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76191]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76216]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:35:24 compute-2 sudo[76241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76241]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76266]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:35:24 compute-2 sudo[76291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76291]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76316]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:24 compute-2 sudo[76341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76341]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76366]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:35:24 compute-2 sudo[76391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76391]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:24 compute-2 sudo[76439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:24 compute-2 sudo[76439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:24 compute-2 sudo[76439]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:35:25 compute-2 sudo[76464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76464]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:25 compute-2 sudo[76489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76489]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new
Jan 22 13:35:25 compute-2 sudo[76514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76514]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:25 compute-2 sudo[76539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76539]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 13:35:25 compute-2 sudo[76564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76564]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:25 compute-2 sudo[76589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76589]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:35:25 compute-2 sudo[76614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76614]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:25 compute-2 sudo[76639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 sudo[76639]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:25 compute-2 sudo[76664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:25 compute-2 sudo[76664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:25 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:25 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:25 compute-2 podman[76726]: 2026-01-22 13:35:25.96217771 +0000 UTC m=+0.045359261 container create 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:35:25 compute-2 systemd[1]: Started libpod-conmon-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope.
Jan 22 13:35:26 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:26 compute-2 podman[76726]: 2026-01-22 13:35:26.030144878 +0000 UTC m=+0.113326469 container init 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:26 compute-2 podman[76726]: 2026-01-22 13:35:26.038491945 +0000 UTC m=+0.121673496 container start 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:26 compute-2 podman[76726]: 2026-01-22 13:35:25.94178901 +0000 UTC m=+0.024970581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:26 compute-2 great_golick[76742]: 167 167
Jan 22 13:35:26 compute-2 podman[76726]: 2026-01-22 13:35:26.043234579 +0000 UTC m=+0.126416130 container attach 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:35:26 compute-2 systemd[1]: libpod-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope: Deactivated successfully.
Jan 22 13:35:26 compute-2 conmon[76742]: conmon 2ed7f5a80cbdd333dc0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope/container/memory.events
Jan 22 13:35:26 compute-2 podman[76748]: 2026-01-22 13:35:26.091824703 +0000 UTC m=+0.026211203 container died 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 13:35:26 compute-2 podman[76748]: 2026-01-22 13:35:26.129467472 +0000 UTC m=+0.063853952 container remove 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 13:35:26 compute-2 systemd[1]: libpod-conmon-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope: Deactivated successfully.
Jan 22 13:35:26 compute-2 podman[76765]: 2026-01-22 13:35:26.21126803 +0000 UTC m=+0.044827088 container create 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:35:26 compute-2 systemd[1]: Started libpod-conmon-2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02.scope.
Jan 22 13:35:26 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:26 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:26 compute-2 podman[76765]: 2026-01-22 13:35:26.191347411 +0000 UTC m=+0.024906499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:26 compute-2 podman[76765]: 2026-01-22 13:35:26.289369621 +0000 UTC m=+0.122928689 container init 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 22 13:35:26 compute-2 podman[76765]: 2026-01-22 13:35:26.297378059 +0000 UTC m=+0.130937117 container start 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:35:26 compute-2 podman[76765]: 2026-01-22 13:35:26.301401574 +0000 UTC m=+0.134960632 container attach 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 13:35:27 compute-2 systemd[1]: libpod-2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02.scope: Deactivated successfully.
Jan 22 13:35:27 compute-2 podman[76765]: 2026-01-22 13:35:27.308958484 +0000 UTC m=+1.142517552 container died 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 13:35:27 compute-2 systemd[1]: var-lib-containers-storage-overlay-856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381-merged.mount: Deactivated successfully.
Jan 22 13:35:27 compute-2 podman[76765]: 2026-01-22 13:35:27.380561775 +0000 UTC m=+1.214120833 container remove 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 22 13:35:27 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:27 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:27 compute-2 systemd[1]: libpod-conmon-2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02.scope: Deactivated successfully.
Jan 22 13:35:27 compute-2 systemd[1]: Reloading.
Jan 22 13:35:27 compute-2 systemd-rc-local-generator[76847]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:27 compute-2 systemd-sysv-generator[76852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:27 compute-2 systemd[1]: Reloading.
Jan 22 13:35:27 compute-2 systemd-rc-local-generator[76885]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:27 compute-2 systemd-sysv-generator[76889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:27 compute-2 systemd[1]: Reached target All Ceph clusters and services.
Jan 22 13:35:27 compute-2 systemd[1]: Reloading.
Jan 22 13:35:27 compute-2 systemd-rc-local-generator[76924]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:27 compute-2 systemd-sysv-generator[76928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:28 compute-2 systemd[1]: Reached target Ceph cluster 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:35:28 compute-2 systemd[1]: Reloading.
Jan 22 13:35:28 compute-2 systemd-rc-local-generator[76964]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:28 compute-2 systemd-sysv-generator[76968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:28 compute-2 systemd[1]: Reloading.
Jan 22 13:35:28 compute-2 systemd-rc-local-generator[77002]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:28 compute-2 systemd-sysv-generator[77007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:28 compute-2 systemd[1]: Created slice Slice /system/ceph-088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:35:28 compute-2 systemd[1]: Reached target System Time Set.
Jan 22 13:35:28 compute-2 systemd[1]: Reached target System Time Synchronized.
Jan 22 13:35:28 compute-2 systemd[1]: Starting Ceph mon.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:35:28 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:28 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 13:35:28 compute-2 podman[77062]: 2026-01-22 13:35:28.868786757 +0000 UTC m=+0.037456426 container create ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 13:35:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6269f9312632c62e86d13c965ce5e4ccf9b1ba9a87f9e29364ed084fe61c1572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6269f9312632c62e86d13c965ce5e4ccf9b1ba9a87f9e29364ed084fe61c1572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6269f9312632c62e86d13c965ce5e4ccf9b1ba9a87f9e29364ed084fe61c1572/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:28 compute-2 podman[77062]: 2026-01-22 13:35:28.925749809 +0000 UTC m=+0.094419508 container init ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 13:35:28 compute-2 podman[77062]: 2026-01-22 13:35:28.93193866 +0000 UTC m=+0.100608329 container start ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 13:35:28 compute-2 bash[77062]: ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6
Jan 22 13:35:28 compute-2 podman[77062]: 2026-01-22 13:35:28.853253673 +0000 UTC m=+0.021923362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:28 compute-2 systemd[1]: Started Ceph mon.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:35:28 compute-2 ceph-mon[77081]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:35:28 compute-2 ceph-mon[77081]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: pidfile_write: ignore empty --pid-file
Jan 22 13:35:28 compute-2 ceph-mon[77081]: load: jerasure load: lrc 
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: RocksDB version: 7.9.2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Git sha 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: DB SUMMARY
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: DB Session ID:  HOKNYZUMFPVI0T4U6KMU
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: CURRENT file:  CURRENT
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                         Options.error_if_exists: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                       Options.create_if_missing: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                                     Options.env: 0x55f4cd06bc40
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                                Options.info_log: 0x55f4cf3a0fc0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                              Options.statistics: (nil)
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                               Options.use_fsync: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                              Options.db_log_dir: 
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                                 Options.wal_dir: 
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                    Options.write_buffer_manager: 0x55f4cf3b0b40
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.unordered_write: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                               Options.row_cache: None
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                              Options.wal_filter: None
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.two_write_queues: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.wal_compression: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.atomic_flush: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.max_background_jobs: 2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.max_background_compactions: -1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.max_subcompactions: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                          Options.max_open_files: -1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Compression algorithms supported:
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kZSTD supported: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kXpressCompression supported: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kBZip2Compression supported: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kLZ4Compression supported: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kZlibCompression supported: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kLZ4HCCompression supported: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         kSnappyCompression supported: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:           Options.merge_operator: 
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:        Options.compaction_filter: None
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4cf3a0c00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f4cf3991f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:          Options.compression: NoCompression
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.num_levels: 7
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2fc6eab8-1992-4005-a2ff-000040659fe1
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088928983160, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088928986627, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088928986799, "job": 1, "event": "recovery_finished"}
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f4cf3c2e00
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: DB pointer 0x55f4cf44c000
Jan 22 13:35:28 compute-2 ceph-mon[77081]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 13:35:28 compute-2 ceph-mon[77081]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Jan 22 13:35:28 compute-2 ceph-mon[77081]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:28 compute-2 ceph-mon[77081]: mon.compute-2@-1(???) e0 preinit fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:28 compute-2 sudo[76664]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).mds e2 new map
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:35:18.163248+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e30 e30: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e31 e31: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e32 e32: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 e33: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 3314933000852226048, adjusting msgr requires
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e14: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v65: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e15: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v67: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e16: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mgrmap e9: compute-0.nyayzk(active, since 2m)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e17: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v70: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e18: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e19: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v73: 4 pgs: 4 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e20: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e21: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v76: 67 pgs: 63 unknown, 4 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e22: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v78: 68 pgs: 33 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e23: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e24: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v81: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.2 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.2 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e25: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e26: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v84: 69 pgs: 1 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.3 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.3 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e27: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v86: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.4 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.2 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.2 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.4 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e28: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.6 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.6 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e29: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v89: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.7 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.7 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.8 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.8 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e30: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v91: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.3 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.3 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.b scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.b scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.5 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.5 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.12 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.12 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e31: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v93: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.17 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.17 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.7 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.7 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v94: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e32: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.8 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.8 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v96: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.18 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.18 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.b scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.b scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.19 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.19 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v97: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1b scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1b scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.f scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.f scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1e scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1e scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2012634198' entity='client.admin' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v98: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1f scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 3.1f scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.11 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.11 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.14237 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Saving service ingress.rgw.default spec with placement count:2
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.12 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.12 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v99: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.14 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.14 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1e scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: osdmap e33: 2 total, 2 up, 2 in
Jan 22 13:35:29 compute-2 ceph-mon[77081]: fsmap cephfs:0
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1e scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.14239 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v101: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.6 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.6 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.9 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.9 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.16 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.16 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v102: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1f scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1f scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.17 deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.17 deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.4 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.4 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v103: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.18 scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.18 scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.c scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.c scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1a scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1a scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v104: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.a deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.a deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v105: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Deploying daemon mon.compute-2 on compute-2
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.d deep-scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.d deep-scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 13:35:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2935446327' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 13:35:29 compute-2 ceph-mon[77081]: pgmap v106: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1c scrub starts
Jan 22 13:35:29 compute-2 ceph-mon[77081]: 2.1c scrub ok
Jan 22 13:35:29 compute-2 ceph-mon[77081]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
Jan 22 13:35:31 compute-2 ceph-mon[77081]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Jan 22 13:35:31 compute-2 ceph-mon[77081]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 22 13:35:31 compute-2 ceph-mon[77081]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 22 13:35:31 compute-2 ceph-mon[77081]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:31 compute-2 ceph-mon[77081]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 13:35:31 compute-2 ceph-mon[77081]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 13:35:33 compute-2 ceph-mon[77081]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 13:35:34 compute-2 ceph-mon[77081]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 22 13:35:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 22 13:35:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 13:35:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:36 compute-2 ceph-mon[77081]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-22T13:35:26.337912Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 22 13:35:36 compute-2 ceph-mon[77081]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 22 13:35:36 compute-2 ceph-mon[77081]: paxos.1).electionLogic(10) init, last seen epoch 10
Jan 22 13:35:36 compute-2 ceph-mon[77081]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:41 compute-2 ceph-mon[77081]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 3.16 scrub starts
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 3.16 scrub ok
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: mon.compute-0 calling monitor election
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: mon.compute-2 calling monitor election
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 3.1a scrub starts
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 3.1a scrub ok
Jan 22 13:35:41 compute-2 ceph-mon[77081]: pgmap v111: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 2.10 scrub starts
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 2.10 scrub ok
Jan 22 13:35:41 compute-2 ceph-mon[77081]: mon.compute-1 calling monitor election
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 3.15 scrub starts
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 3.15 scrub ok
Jan 22 13:35:41 compute-2 ceph-mon[77081]: pgmap v112: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 2.15 scrub starts
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 2.15 scrub ok
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 2.1b scrub starts
Jan 22 13:35:41 compute-2 ceph-mon[77081]: 2.1b scrub ok
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:41 compute-2 ceph-mon[77081]: pgmap v113: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:41 compute-2 ceph-mon[77081]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 13:35:41 compute-2 ceph-mon[77081]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 22 13:35:41 compute-2 ceph-mon[77081]: fsmap cephfs:0
Jan 22 13:35:41 compute-2 ceph-mon[77081]: osdmap e33: 2 total, 2 up, 2 in
Jan 22 13:35:41 compute-2 ceph-mon[77081]: mgrmap e9: compute-0.nyayzk(active, since 2m)
Jan 22 13:35:41 compute-2 ceph-mon[77081]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 13:35:41 compute-2 ceph-mon[77081]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 13:35:41 compute-2 ceph-mon[77081]:     fs cephfs is offline because no MDS is active for it.
Jan 22 13:35:41 compute-2 ceph-mon[77081]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 13:35:41 compute-2 ceph-mon[77081]:     fs cephfs has 0 MDS online, but wants 1
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 13:35:41 compute-2 sudo[77120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:41 compute-2 sudo[77120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:41 compute-2 sudo[77120]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:41 compute-2 sudo[77145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:35:41 compute-2 sudo[77145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:41 compute-2 sudo[77145]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:41 compute-2 sudo[77170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:41 compute-2 sudo[77170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:41 compute-2 sudo[77170]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:41 compute-2 sudo[77195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:41 compute-2 sudo[77195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:42 compute-2 podman[77258]: 2026-01-22 13:35:42.203787988 +0000 UTC m=+0.042110506 container create 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 13:35:42 compute-2 systemd[72610]: Starting Mark boot as successful...
Jan 22 13:35:42 compute-2 systemd[72610]: Finished Mark boot as successful.
Jan 22 13:35:42 compute-2 systemd[1]: Started libpod-conmon-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope.
Jan 22 13:35:42 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:42 compute-2 podman[77258]: 2026-01-22 13:35:42.260303988 +0000 UTC m=+0.098626526 container init 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:35:42 compute-2 podman[77258]: 2026-01-22 13:35:42.266223612 +0000 UTC m=+0.104546120 container start 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 13:35:42 compute-2 podman[77258]: 2026-01-22 13:35:42.269794085 +0000 UTC m=+0.108116633 container attach 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 13:35:42 compute-2 youthful_ardinghelli[77275]: 167 167
Jan 22 13:35:42 compute-2 systemd[1]: libpod-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope: Deactivated successfully.
Jan 22 13:35:42 compute-2 conmon[77275]: conmon 3fe5112f81d0e2300ada <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope/container/memory.events
Jan 22 13:35:42 compute-2 podman[77258]: 2026-01-22 13:35:42.273509812 +0000 UTC m=+0.111832330 container died 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 13:35:42 compute-2 podman[77258]: 2026-01-22 13:35:42.18423759 +0000 UTC m=+0.022560128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:42 compute-2 systemd[1]: var-lib-containers-storage-overlay-7eebce0a94d56e6df0ec4848887b03614267c4d8b406ffe978cca2ec168a88d9-merged.mount: Deactivated successfully.
Jan 22 13:35:42 compute-2 podman[77258]: 2026-01-22 13:35:42.316131671 +0000 UTC m=+0.154454219 container remove 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:35:42 compute-2 systemd[1]: libpod-conmon-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope: Deactivated successfully.
Jan 22 13:35:42 compute-2 systemd[1]: Reloading.
Jan 22 13:35:42 compute-2 systemd-rc-local-generator[77321]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:42 compute-2 systemd-sysv-generator[77324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:42 compute-2 systemd[1]: Reloading.
Jan 22 13:35:42 compute-2 systemd-rc-local-generator[77362]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:42 compute-2 systemd-sysv-generator[77366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:42 compute-2 systemd[1]: Starting Ceph mgr.compute-2.tjdsdx for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:35:43 compute-2 podman[77418]: 2026-01-22 13:35:43.074661092 +0000 UTC m=+0.039503689 container create 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 13:35:43 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:43 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:43 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:43 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/var/lib/ceph/mgr/ceph-compute-2.tjdsdx supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:43 compute-2 podman[77418]: 2026-01-22 13:35:43.14418109 +0000 UTC m=+0.109023707 container init 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:35:43 compute-2 podman[77418]: 2026-01-22 13:35:43.150121805 +0000 UTC m=+0.114964402 container start 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:35:43 compute-2 podman[77418]: 2026-01-22 13:35:43.055986626 +0000 UTC m=+0.020829243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:43 compute-2 bash[77418]: 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f
Jan 22 13:35:43 compute-2 systemd[1]: Started Ceph mgr.compute-2.tjdsdx for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:35:43 compute-2 sudo[77195]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: pidfile_write: ignore empty --pid-file
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'alerts'
Jan 22 13:35:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 13:35:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 13:35:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:43 compute-2 ceph-mon[77081]: Deploying daemon mgr.compute-2.tjdsdx on compute-2
Jan 22 13:35:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'balancer'
Jan 22 13:35:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:43.606+0000 7f5297bb2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 13:35:43 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'cephadm'
Jan 22 13:35:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:43.867+0000 7f5297bb2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 13:35:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e33 _set_new_cache_sizes cache_size:1019920026 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:44 compute-2 ceph-mon[77081]: pgmap v114: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 13:35:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:44 compute-2 ceph-mon[77081]: Deploying daemon mgr.compute-1.hzmatt on compute-1
Jan 22 13:35:45 compute-2 sudo[77463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:45 compute-2 sudo[77463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:45 compute-2 sudo[77463]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:45 compute-2 sudo[77490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:35:45 compute-2 sudo[77490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:45 compute-2 sudo[77490]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:45 compute-2 ceph-mon[77081]: 3.11 scrub starts
Jan 22 13:35:45 compute-2 ceph-mon[77081]: 3.11 scrub ok
Jan 22 13:35:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 13:35:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 13:35:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:45 compute-2 ceph-mon[77081]: Deploying daemon crash.compute-2 on compute-2
Jan 22 13:35:45 compute-2 ceph-mon[77081]: pgmap v115: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:45 compute-2 sudo[77522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:45 compute-2 sudo[77522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:45 compute-2 sudo[77522]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:45 compute-2 sudo[77549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:35:45 compute-2 sudo[77549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:45 compute-2 podman[77615]: 2026-01-22 13:35:45.956788162 +0000 UTC m=+0.040673939 container create ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:35:45 compute-2 systemd[1]: Started libpod-conmon-ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916.scope.
Jan 22 13:35:46 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:46 compute-2 podman[77615]: 2026-01-22 13:35:45.938795294 +0000 UTC m=+0.022680991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:46 compute-2 podman[77615]: 2026-01-22 13:35:46.033435835 +0000 UTC m=+0.117321542 container init ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:35:46 compute-2 podman[77615]: 2026-01-22 13:35:46.048773614 +0000 UTC m=+0.132659301 container start ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:35:46 compute-2 podman[77615]: 2026-01-22 13:35:46.052874051 +0000 UTC m=+0.136759728 container attach ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:35:46 compute-2 modest_newton[77632]: 167 167
Jan 22 13:35:46 compute-2 systemd[1]: libpod-ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916.scope: Deactivated successfully.
Jan 22 13:35:46 compute-2 podman[77615]: 2026-01-22 13:35:46.056039663 +0000 UTC m=+0.139925340 container died ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:35:46 compute-2 systemd[1]: var-lib-containers-storage-overlay-38c8e96a49854486ebe6ba9a274bca202e83b1bafd2c285b12434ac64efb1189-merged.mount: Deactivated successfully.
Jan 22 13:35:46 compute-2 podman[77615]: 2026-01-22 13:35:46.101832175 +0000 UTC m=+0.185717852 container remove ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 13:35:46 compute-2 systemd[1]: libpod-conmon-ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916.scope: Deactivated successfully.
Jan 22 13:35:46 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'crash'
Jan 22 13:35:46 compute-2 systemd[1]: Reloading.
Jan 22 13:35:46 compute-2 systemd-rc-local-generator[77678]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:46 compute-2 systemd-sysv-generator[77681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:46 compute-2 systemd[1]: Reloading.
Jan 22 13:35:46 compute-2 ceph-mgr[77438]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 13:35:46 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'dashboard'
Jan 22 13:35:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:46.513+0000 7f5297bb2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 13:35:46 compute-2 systemd-rc-local-generator[77715]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:35:46 compute-2 systemd-sysv-generator[77721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:35:48 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'devicehealth'
Jan 22 13:35:48 compute-2 ceph-mgr[77438]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 13:35:48 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 13:35:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:48.422+0000 7f5297bb2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 13:35:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 13:35:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 13:35:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]:   from numpy import show_config as show_numpy_config
Jan 22 13:35:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:48.963+0000 7f5297bb2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 13:35:48 compute-2 ceph-mgr[77438]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 13:35:48 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'influx'
Jan 22 13:35:49 compute-2 ceph-mgr[77438]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 13:35:49 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'insights'
Jan 22 13:35:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:49.220+0000 7f5297bb2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 13:35:49 compute-2 ceph-mon[77081]: 2.e scrub starts
Jan 22 13:35:49 compute-2 ceph-mon[77081]: 2.e scrub ok
Jan 22 13:35:49 compute-2 ceph-mon[77081]: 3.14 scrub starts
Jan 22 13:35:49 compute-2 ceph-mon[77081]: 3.14 scrub ok
Jan 22 13:35:49 compute-2 systemd[1]: Starting Ceph crash.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:35:49 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'iostat'
Jan 22 13:35:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e33 _set_new_cache_sizes cache_size:1020052989 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:49 compute-2 ceph-mon[77081]: pgmap v116: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:49 compute-2 podman[77776]: 2026-01-22 13:35:49.598552512 +0000 UTC m=+0.025223037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:49 compute-2 ceph-mgr[77438]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 13:35:49 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'k8sevents'
Jan 22 13:35:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:49.743+0000 7f5297bb2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 13:35:49 compute-2 podman[77776]: 2026-01-22 13:35:49.769645712 +0000 UTC m=+0.196316217 container create 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:35:49 compute-2 sshd-session[77726]: Invalid user ubnt from 69.12.83.184 port 33446
Jan 22 13:35:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e34 e34: 2 total, 2 up, 2 in
Jan 22 13:35:50 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:50 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:50 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:50 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:51 compute-2 podman[77776]: 2026-01-22 13:35:51.373093582 +0000 UTC m=+1.799764177 container init 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:35:51 compute-2 podman[77776]: 2026-01-22 13:35:51.383750129 +0000 UTC m=+1.810420654 container start 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 13:35:51 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'localpool'
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 22 13:35:51 compute-2 ceph-mon[77081]: 3.10 scrub starts
Jan 22 13:35:51 compute-2 ceph-mon[77081]: 3.10 scrub ok
Jan 22 13:35:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2143486171' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 13:35:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:51 compute-2 ceph-mon[77081]: 3.f scrub starts
Jan 22 13:35:51 compute-2 ceph-mon[77081]: 3.f scrub ok
Jan 22 13:35:51 compute-2 ceph-mon[77081]: pgmap v117: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2710829164' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 13:35:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:51 compute-2 ceph-mon[77081]: osdmap e34: 2 total, 2 up, 2 in
Jan 22 13:35:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.830+0000 7fd4e3898640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.830+0000 7fd4e3898640 -1 AuthRegistry(0x7fd4dc067150) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.831+0000 7fd4e3898640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.831+0000 7fd4e3898640 -1 AuthRegistry(0x7fd4e3897000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.833+0000 7fd4e0e0c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.834+0000 7fd4e160d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.834+0000 7fd4e1e0e640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.834+0000 7fd4e3898640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 22 13:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 22 13:35:51 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 13:35:52 compute-2 bash[77776]: 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93
Jan 22 13:35:52 compute-2 systemd[1]: Started Ceph crash.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:35:52 compute-2 sshd-session[77809]: Invalid user sol from 45.148.10.240 port 44636
Jan 22 13:35:52 compute-2 sshd-session[77809]: Connection closed by invalid user sol 45.148.10.240 port 44636 [preauth]
Jan 22 13:35:52 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'mirroring'
Jan 22 13:35:52 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'nfs'
Jan 22 13:35:53 compute-2 sudo[77549]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e35 e35: 2 total, 2 up, 2 in
Jan 22 13:35:53 compute-2 sshd-session[77726]: Connection closed by invalid user ubnt 69.12.83.184 port 33446 [preauth]
Jan 22 13:35:53 compute-2 ceph-mgr[77438]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 13:35:53 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'orchestrator'
Jan 22 13:35:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:53.637+0000 7f5297bb2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 13:35:53 compute-2 ceph-mon[77081]: 3.e scrub starts
Jan 22 13:35:53 compute-2 ceph-mon[77081]: 3.e scrub ok
Jan 22 13:35:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/777136089' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 13:35:53 compute-2 ceph-mon[77081]: pgmap v119: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-2 sudo[77811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:54 compute-2 sudo[77811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:54 compute-2 sudo[77811]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:54 compute-2 sudo[77836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:35:54 compute-2 sudo[77836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:54 compute-2 sudo[77836]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:54 compute-2 sudo[77861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:35:54 compute-2 sudo[77861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:54 compute-2 sudo[77861]: pam_unix(sudo:session): session closed for user root
Jan 22 13:35:54 compute-2 sudo[77886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 22 13:35:54 compute-2 sudo[77886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:35:54 compute-2 ceph-mgr[77438]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 13:35:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:54.371+0000 7f5297bb2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 13:35:54 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 13:35:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e36 e36: 2 total, 2 up, 2 in
Jan 22 13:35:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e36 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:54 compute-2 ceph-mgr[77438]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 13:35:54 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'osd_support'
Jan 22 13:35:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:54.662+0000 7f5297bb2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 13:35:54 compute-2 podman[77949]: 2026-01-22 13:35:54.76915893 +0000 UTC m=+0.084786276 container create 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 13:35:54 compute-2 podman[77949]: 2026-01-22 13:35:54.70919932 +0000 UTC m=+0.024826626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:54 compute-2 systemd[1]: Started libpod-conmon-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope.
Jan 22 13:35:54 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:54 compute-2 podman[77949]: 2026-01-22 13:35:54.880101486 +0000 UTC m=+0.195728832 container init 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 13:35:54 compute-2 podman[77949]: 2026-01-22 13:35:54.891423251 +0000 UTC m=+0.207050557 container start 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:35:54 compute-2 podman[77949]: 2026-01-22 13:35:54.895505387 +0000 UTC m=+0.211132723 container attach 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 13:35:54 compute-2 vigilant_shtern[77965]: 167 167
Jan 22 13:35:54 compute-2 systemd[1]: libpod-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope: Deactivated successfully.
Jan 22 13:35:54 compute-2 conmon[77965]: conmon 7e4d8b2310bffc106ce1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope/container/memory.events
Jan 22 13:35:54 compute-2 podman[77949]: 2026-01-22 13:35:54.899004618 +0000 UTC m=+0.214631944 container died 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:35:54 compute-2 ceph-mgr[77438]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 13:35:54 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 13:35:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:54.912+0000 7f5297bb2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 13:35:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-785d90322996e245c03947f9d39acce835d63dc7f2cc0f2fa8e00e0da535402b-merged.mount: Deactivated successfully.
Jan 22 13:35:54 compute-2 podman[77949]: 2026-01-22 13:35:54.964609614 +0000 UTC m=+0.280236920 container remove 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 13:35:54 compute-2 systemd[1]: libpod-conmon-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope: Deactivated successfully.
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:54 compute-2 ceph-mon[77081]: osdmap e35: 2 total, 2 up, 2 in
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: pgmap v121: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: 2.19 deep-scrub starts
Jan 22 13:35:54 compute-2 ceph-mon[77081]: 2.19 deep-scrub ok
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:54 compute-2 ceph-mon[77081]: osdmap e36: 2 total, 2 up, 2 in
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:35:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:35:55 compute-2 podman[77988]: 2026-01-22 13:35:55.125494159 +0000 UTC m=+0.043908113 container create 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 13:35:55 compute-2 systemd[1]: Started libpod-conmon-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope.
Jan 22 13:35:55 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:35:55 compute-2 podman[77988]: 2026-01-22 13:35:55.104484453 +0000 UTC m=+0.022898437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:35:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 13:35:55 compute-2 podman[77988]: 2026-01-22 13:35:55.234173706 +0000 UTC m=+0.152587670 container init 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 13:35:55 compute-2 podman[77988]: 2026-01-22 13:35:55.244006312 +0000 UTC m=+0.162420266 container start 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:35:55 compute-2 ceph-mgr[77438]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 13:35:55 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'progress'
Jan 22 13:35:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:55.243+0000 7f5297bb2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 13:35:55 compute-2 podman[77988]: 2026-01-22 13:35:55.248705324 +0000 UTC m=+0.167119278 container attach 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:35:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e37 e37: 2 total, 2 up, 2 in
Jan 22 13:35:55 compute-2 ceph-mgr[77438]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 13:35:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:55.558+0000 7f5297bb2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 13:35:55 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'prometheus'
Jan 22 13:35:56 compute-2 ceph-mon[77081]: pgmap v123: 131 pgs: 2 peering, 62 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 13:35:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:35:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 13:35:56 compute-2 ceph-mon[77081]: osdmap e37: 2 total, 2 up, 2 in
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: --> passed data devices: 0 physical, 1 LVM
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: --> relative data size: 1.0
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3569f689-49d4-4dc0-921b-9570c720a1f3
Jan 22 13:35:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e38 e38: 2 total, 2 up, 2 in
Jan 22 13:35:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"} v 0) v1
Jan 22 13:35:56 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3979291260' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 13:35:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e39 e39: 3 total, 2 up, 3 in
Jan 22 13:35:56 compute-2 ceph-mgr[77438]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 13:35:56 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'rbd_support'
Jan 22 13:35:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:56.734+0000 7f5297bb2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 13:35:56 compute-2 lvm[78052]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:35:56 compute-2 lvm[78052]: VG ceph_vg0 finished
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 13:35:56 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 22 13:35:57 compute-2 ceph-mgr[77438]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'restful'
Jan 22 13:35:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:57.068+0000 7f5297bb2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 13:35:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 22 13:35:57 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2302690487' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 13:35:57 compute-2 mystifying_montalcini[78005]:  stderr: got monmap epoch 3
Jan 22 13:35:57 compute-2 mystifying_montalcini[78005]: --> Creating keyring file for osd.2
Jan 22 13:35:57 compute-2 ceph-mon[77081]: from='client.14274 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:35:57 compute-2 ceph-mon[77081]: osdmap e38: 2 total, 2 up, 2 in
Jan 22 13:35:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3979291260' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 13:35:57 compute-2 ceph-mon[77081]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 13:35:57 compute-2 ceph-mon[77081]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]': finished
Jan 22 13:35:57 compute-2 ceph-mon[77081]: osdmap e39: 3 total, 2 up, 3 in
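[editor's note] The epoch bump from e38 (2 total, 2 up, 2 in) to e39 (3 total, 2 up, 3 in) is the effect of the "osd new" command dispatched above: osd.2 is allocated an ID and counted "in" immediately, but it stays down until the daemon is activated and started later in this log. A quick way to verify this intermediate state, as a sketch using standard ceph CLI (the id 2 is taken from this log):

    # osd.2 should appear in the CRUSH tree as down until activation completes
    /usr/bin/ceph osd tree
    /usr/bin/ceph osd dump | grep '^osd.2 '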
Jan 22 13:35:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:35:57 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 22 13:35:57 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 22 13:35:57 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 3569f689-49d4-4dc0-921b-9570c720a1f3 --setuser ceph --setgroup ceph
Jan 22 13:35:57 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'rgw'
Jan 22 13:35:58 compute-2 ceph-mgr[77438]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 13:35:58 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'rook'
Jan 22 13:35:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:58.574+0000 7f5297bb2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 13:35:58 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2302690487' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 13:35:58 compute-2 ceph-mon[77081]: pgmap v127: 146 pgs: 2 peering, 77 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:35:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:35:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e40 e40: 3 total, 2 up, 3 in
Jan 22 13:35:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:35:59 compute-2 ceph-mon[77081]: from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:35:59 compute-2 ceph-mon[77081]: 5.1 deep-scrub starts
Jan 22 13:35:59 compute-2 ceph-mon[77081]: 5.1 deep-scrub ok
Jan 22 13:35:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:35:59 compute-2 ceph-mon[77081]: osdmap e40: 3 total, 2 up, 3 in
Jan 22 13:35:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:35:59 compute-2 ceph-mon[77081]: pgmap v129: 177 pgs: 2 peering, 108 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e41 e41: 3 total, 2 up, 3 in
Jan 22 13:36:01 compute-2 ceph-mon[77081]: from='client.14289 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 22 13:36:01 compute-2 ceph-mon[77081]: 3.c scrub starts
Jan 22 13:36:01 compute-2 ceph-mon[77081]: 3.c scrub ok
Jan 22 13:36:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:01 compute-2 ceph-mon[77081]: osdmap e41: 3 total, 2 up, 3 in
Jan 22 13:36:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:01 compute-2 ceph-mon[77081]: 3.d scrub starts
Jan 22 13:36:01 compute-2 ceph-mon[77081]: 3.d scrub ok
Jan 22 13:36:01 compute-2 ceph-mgr[77438]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:01.205+0000 7f5297bb2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'selftest'
Jan 22 13:36:01 compute-2 ceph-mgr[77438]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:01.479+0000 7f5297bb2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'snap_schedule'
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]:  stderr: 2026-01-22T13:35:57.336+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]:  stderr: 2026-01-22T13:35:57.336+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]:  stderr: 2026-01-22T13:35:57.336+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]:  stderr: 2026-01-22T13:35:57.337+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 13:36:01 compute-2 ceph-mgr[77438]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'stats'
Jan 22 13:36:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:01.757+0000 7f5297bb2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 22 13:36:01 compute-2 mystifying_montalcini[78005]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
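[editor's note] The three "_read_bdev_label ... Malformed input" lines and the "_read_fsid unparsable uuid" line above come from ceph-osd --mkfs probing a brand-new logical volume that has no BlueStore label yet; on a first-time prepare they are expected noise, and the run still ends with "prepare successful", "activate successful", and "create successful". Once mkfs has written the label it should decode cleanly, which can be checked as a sketch (the device path is the one used throughout this log):

    # After a successful mkfs the BlueStore label should read back without errors
    /usr/bin/ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0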
Jan 22 13:36:01 compute-2 systemd[1]: libpod-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope: Deactivated successfully.
Jan 22 13:36:01 compute-2 systemd[1]: libpod-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope: Consumed 2.649s CPU time.
Jan 22 13:36:01 compute-2 podman[77988]: 2026-01-22 13:36:01.803865137 +0000 UTC m=+6.722279101 container died 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:36:01 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'status'
Jan 22 13:36:02 compute-2 ceph-mgr[77438]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 13:36:02 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'telegraf'
Jan 22 13:36:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:02.271+0000 7f5297bb2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 13:36:02 compute-2 ceph-mgr[77438]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 13:36:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:02.522+0000 7f5297bb2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 13:36:02 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'telemetry'
Jan 22 13:36:03 compute-2 ceph-mgr[77438]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 13:36:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:03.193+0000 7f5297bb2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 13:36:03 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 13:36:03 compute-2 ceph-mgr[77438]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 13:36:03 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'volumes'
Jan 22 13:36:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:03.904+0000 7f5297bb2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-2 ceph-mon[77081]: 4.1 scrub starts
Jan 22 13:36:04 compute-2 ceph-mon[77081]: 4.1 scrub ok
Jan 22 13:36:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:04 compute-2 ceph-mon[77081]: pgmap v131: 177 pgs: 31 unknown, 146 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:04 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/265572544' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 22 13:36:04 compute-2 ceph-mgr[77438]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-2 ceph-mgr[77438]: mgr[py] Loading python module 'zabbix'
Jan 22 13:36:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:04.700+0000 7f5297bb2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-2 ceph-mgr[77438]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:04.940+0000 7f5297bb2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 13:36:04 compute-2 ceph-mgr[77438]: ms_deliver_dispatch: unhandled message 0x562f1e9fb600 mon_map magic: 0 v1 from mon.1 v2:192.168.122.102:3300/0
Jan 22 13:36:04 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:36:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6-merged.mount: Deactivated successfully.
Jan 22 13:36:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:05 compute-2 sshd-session[71355]: Received disconnect from 38.102.83.41 port 45612:11: disconnected by user
Jan 22 13:36:05 compute-2 sshd-session[71355]: Disconnected from user zuul 38.102.83.41 port 45612
Jan 22 13:36:05 compute-2 sshd-session[71352]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:36:05 compute-2 systemd[1]: session-19.scope: Deactivated successfully.
Jan 22 13:36:05 compute-2 systemd[1]: session-19.scope: Consumed 8.980s CPU time.
Jan 22 13:36:05 compute-2 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Jan 22 13:36:05 compute-2 systemd-logind[787]: Removed session 19.
Jan 22 13:36:05 compute-2 podman[77988]: 2026-01-22 13:36:05.197497783 +0000 UTC m=+10.115911727 container remove 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:36:05 compute-2 systemd[1]: libpod-conmon-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope: Deactivated successfully.
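[editor's note] This closes the lifecycle of the one-shot mystifying_montalcini container that ran ceph-volume: create, start/attach, died, remove, with the conmon scope torn down last. When auditing a run like this after the fact, podman's event log can replay the same sequence; a sketch (the time window is an assumption):

    # Replay recent container lifecycle events (create/start/died/remove)
    /usr/bin/podman events --stream=false --since 10m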
Jan 22 13:36:05 compute-2 sudo[77886]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:05 compute-2 sudo[78987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:05 compute-2 sudo[78987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:05 compute-2 sudo[78987]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:05 compute-2 sudo[79012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:36:05 compute-2 sudo[79012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:05 compute-2 sudo[79012]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:05 compute-2 ceph-mon[77081]: pgmap v132: 177 pgs: 31 unknown, 146 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:05 compute-2 ceph-mon[77081]: 4.2 scrub starts
Jan 22 13:36:05 compute-2 ceph-mon[77081]: 4.2 scrub ok
Jan 22 13:36:05 compute-2 sudo[79037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:05 compute-2 sudo[79037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:05 compute-2 sudo[79037]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:05 compute-2 sudo[79062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- lvm list --format json
Jan 22 13:36:05 compute-2 sudo[79062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:05 compute-2 podman[79125]: 2026-01-22 13:36:05.833651401 +0000 UTC m=+0.024165720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:05 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:36:06 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:36:07 compute-2 podman[79125]: 2026-01-22 13:36:07.443617539 +0000 UTC m=+1.634131858 container create fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 13:36:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e42 e42: 3 total, 2 up, 3 in
Jan 22 13:36:07 compute-2 systemd[1]: Started libpod-conmon-fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c.scope.
Jan 22 13:36:07 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:07 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/459129720' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 22 13:36:07 compute-2 ceph-mon[77081]: pgmap v133: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 13:36:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:07 compute-2 ceph-mon[77081]: Standby manager daemon compute-2.tjdsdx started
Jan 22 13:36:07 compute-2 podman[79125]: 2026-01-22 13:36:07.575068058 +0000 UTC m=+1.765582387 container init fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 13:36:07 compute-2 podman[79125]: 2026-01-22 13:36:07.587379518 +0000 UTC m=+1.777893817 container start fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 13:36:07 compute-2 podman[79125]: 2026-01-22 13:36:07.592954343 +0000 UTC m=+1.783468642 container attach fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 13:36:07 compute-2 ecstatic_bohr[79141]: 167 167
Jan 22 13:36:07 compute-2 systemd[1]: libpod-fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c.scope: Deactivated successfully.
Jan 22 13:36:07 compute-2 podman[79125]: 2026-01-22 13:36:07.597911502 +0000 UTC m=+1.788425801 container died fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:36:07 compute-2 systemd[1]: var-lib-containers-storage-overlay-92d4f3fa2503645788d8324f74855da5b2e15f6d7de7371668c4420bc6df12fb-merged.mount: Deactivated successfully.
Jan 22 13:36:07 compute-2 podman[79125]: 2026-01-22 13:36:07.647220945 +0000 UTC m=+1.837735244 container remove fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 13:36:07 compute-2 systemd[1]: libpod-conmon-fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c.scope: Deactivated successfully.
Jan 22 13:36:07 compute-2 podman[79164]: 2026-01-22 13:36:07.851711774 +0000 UTC m=+0.074734125 container create 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 13:36:07 compute-2 systemd[1]: Started libpod-conmon-06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910.scope.
Jan 22 13:36:07 compute-2 podman[79164]: 2026-01-22 13:36:07.809592989 +0000 UTC m=+0.032615370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:07 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:07 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:10 compute-2 podman[79164]: 2026-01-22 13:36:10.892179814 +0000 UTC m=+3.115202185 container init 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 13:36:10 compute-2 podman[79164]: 2026-01-22 13:36:10.905463456 +0000 UTC m=+3.128485837 container start 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:36:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:11 compute-2 podman[79164]: 2026-01-22 13:36:11.176085571 +0000 UTC m=+3.399107972 container attach 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:11 compute-2 eager_saha[79181]: {
Jan 22 13:36:11 compute-2 eager_saha[79181]:     "2": [
Jan 22 13:36:11 compute-2 eager_saha[79181]:         {
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "devices": [
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "/dev/loop3"
Jan 22 13:36:11 compute-2 eager_saha[79181]:             ],
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "lv_name": "ceph_lv0",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "lv_size": "7511998464",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=jEocwv-ccRD-GQ8s-06tX-i7z2-rzc0-cFSAk3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3569f689-49d4-4dc0-921b-9570c720a1f3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "lv_uuid": "jEocwv-ccRD-GQ8s-06tX-i7z2-rzc0-cFSAk3",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "name": "ceph_lv0",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "tags": {
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.block_uuid": "jEocwv-ccRD-GQ8s-06tX-i7z2-rzc0-cFSAk3",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.cephx_lockbox_secret": "",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.cluster_name": "ceph",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.crush_device_class": "",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.encrypted": "0",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.osd_fsid": "3569f689-49d4-4dc0-921b-9570c720a1f3",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.osd_id": "2",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.type": "block",
Jan 22 13:36:11 compute-2 eager_saha[79181]:                 "ceph.vdo": "0"
Jan 22 13:36:11 compute-2 eager_saha[79181]:             },
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "type": "block",
Jan 22 13:36:11 compute-2 eager_saha[79181]:             "vg_name": "ceph_vg0"
Jan 22 13:36:11 compute-2 eager_saha[79181]:         }
Jan 22 13:36:11 compute-2 eager_saha[79181]:     ]
Jan 22 13:36:11 compute-2 eager_saha[79181]: }
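[editor's note] The JSON block above is the output of the "ceph-volume lvm list --format json" call run via cephadm a few lines earlier: a map from OSD id to the logical volumes backing it, with the cluster/OSD fsids and the osdspec affinity recorded as LV tags. A one-line sketch to reduce it to an id-to-device table (assumes jq is installed; the command is the same one the log shows cephadm running):

    # Map each OSD id to its backing LV path from the ceph-volume JSON
    /usr/bin/ceph-volume lvm list --format json | jq -r 'to_entries[] | "\(.key) \(.value[0].lv_path)"'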
Jan 22 13:36:11 compute-2 systemd[1]: libpod-06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910.scope: Deactivated successfully.
Jan 22 13:36:11 compute-2 podman[79164]: 2026-01-22 13:36:11.706580086 +0000 UTC m=+3.929602437 container died 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 e43: 3 total, 2 up, 3 in
Jan 22 13:36:11 compute-2 ceph-mon[77081]: 3.5 scrub starts
Jan 22 13:36:11 compute-2 ceph-mon[77081]: 3.5 scrub ok
Jan 22 13:36:11 compute-2 ceph-mon[77081]: 5.2 deep-scrub starts
Jan 22 13:36:11 compute-2 ceph-mon[77081]: 5.2 deep-scrub ok
Jan 22 13:36:11 compute-2 ceph-mon[77081]: pgmap v134: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/647988089' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
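[editor's note] The burst of "osd pool set ... pgp_num_actual" dispatch/finished pairs here (and again around 13:36:13) appears to be the mgr's pg autoscaler stepping pgp_num up toward pg_num now that a third OSD is in the map, which is also why the pgmap briefly reports PGs in peering. The per-pool targets and autoscaler decisions can be inspected with the standard status command, as a sketch:

    # Show per-pool pg_num, targets, and autoscaler recommendations
    /usr/bin/ceph osd pool autoscale-status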
Jan 22 13:36:11 compute-2 ceph-mon[77081]: osdmap e42: 3 total, 2 up, 3 in
Jan 22 13:36:11 compute-2 ceph-mon[77081]: mgrmap e10: compute-0.nyayzk(active, since 3m), standbys: compute-2.tjdsdx
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.tjdsdx", "id": "compute-2.tjdsdx"}]: dispatch
Jan 22 13:36:11 compute-2 ceph-mon[77081]: Standby manager daemon compute-1.hzmatt started
Jan 22 13:36:11 compute-2 systemd[1]: var-lib-containers-storage-overlay-51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8-merged.mount: Deactivated successfully.
Jan 22 13:36:11 compute-2 podman[79164]: 2026-01-22 13:36:11.863820229 +0000 UTC m=+4.086842570 container remove 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:36:11 compute-2 systemd[1]: libpod-conmon-06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910.scope: Deactivated successfully.
Jan 22 13:36:11 compute-2 sudo[79062]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:11 compute-2 sudo[79202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:11 compute-2 sudo[79202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:11 compute-2 sudo[79202]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:12 compute-2 sudo[79227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:36:12 compute-2 sudo[79227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:12 compute-2 sudo[79227]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:12 compute-2 sudo[79252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:12 compute-2 sudo[79252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:12 compute-2 sudo[79252]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:12 compute-2 sudo[79277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:36:12 compute-2 sudo[79277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:12 compute-2 podman[79342]: 2026-01-22 13:36:12.556592782 +0000 UTC m=+0.045387103 container create 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 13:36:12 compute-2 systemd[1]: Started libpod-conmon-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope.
Jan 22 13:36:12 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:12 compute-2 podman[79342]: 2026-01-22 13:36:12.537138276 +0000 UTC m=+0.025932627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:12 compute-2 podman[79342]: 2026-01-22 13:36:12.635709206 +0000 UTC m=+0.124503527 container init 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:12 compute-2 podman[79342]: 2026-01-22 13:36:12.643096882 +0000 UTC m=+0.131891203 container start 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:12 compute-2 podman[79342]: 2026-01-22 13:36:12.64682556 +0000 UTC m=+0.135619881 container attach 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:36:12 compute-2 crazy_diffie[79357]: 167 167
Jan 22 13:36:12 compute-2 systemd[1]: libpod-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope: Deactivated successfully.
Jan 22 13:36:12 compute-2 conmon[79357]: conmon 69294cacae79399a349d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope/container/memory.events
Jan 22 13:36:12 compute-2 podman[79342]: 2026-01-22 13:36:12.649005858 +0000 UTC m=+0.137800179 container died 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:36:12 compute-2 systemd[1]: var-lib-containers-storage-overlay-e6371cccffdc24f50375a2d375fe09c5e3fd4ae0a7c9f135f0930506b265e20b-merged.mount: Deactivated successfully.
Jan 22 13:36:12 compute-2 podman[79342]: 2026-01-22 13:36:12.688628577 +0000 UTC m=+0.177422898 container remove 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 13:36:12 compute-2 systemd[1]: libpod-conmon-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope: Deactivated successfully.
Jan 22 13:36:13 compute-2 podman[79391]: 2026-01-22 13:36:13.623194601 +0000 UTC m=+0.024312494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:13 compute-2 ceph-mon[77081]: 4.3 deep-scrub starts
Jan 22 13:36:13 compute-2 ceph-mon[77081]: 4.3 deep-scrub ok
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/502293407' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 22 13:36:13 compute-2 ceph-mon[77081]: pgmap v136: 177 pgs: 38 peering, 139 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:13 compute-2 ceph-mon[77081]: 3.13 scrub starts
Jan 22 13:36:13 compute-2 ceph-mon[77081]: 3.13 scrub ok
Jan 22 13:36:13 compute-2 ceph-mon[77081]: pgmap v137: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:36:13 compute-2 ceph-mon[77081]: osdmap e43: 3 total, 2 up, 3 in
Jan 22 13:36:13 compute-2 ceph-mon[77081]: mgrmap e11: compute-0.nyayzk(active, since 3m), standbys: compute-2.tjdsdx, compute-1.hzmatt
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hzmatt", "id": "compute-1.hzmatt"}]: dispatch
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 13:36:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:13 compute-2 ceph-mon[77081]: Deploying daemon osd.2 on compute-2
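[editor's note] "Deploying daemon osd.2 on compute-2" is the orchestrator hand-off: the keyring for osd.2 was fetched ("auth get"), a minimal conf generated, and cephadm on this host now writes the systemd unit (the "Reloading." lines below) and starts the activate container. Progress can be followed from any admin node, as a sketch using standard orchestrator commands:

    # Watch the new daemon appear and check overall cluster state
    /usr/bin/ceph orch ps --daemon-type osd --refresh
    /usr/bin/ceph -s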
Jan 22 13:36:13 compute-2 podman[79391]: 2026-01-22 13:36:13.871663599 +0000 UTC m=+0.272781482 container create 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:14 compute-2 systemd[1]: Started libpod-conmon-602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd.scope.
Jan 22 13:36:14 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:14 compute-2 podman[79391]: 2026-01-22 13:36:14.490604556 +0000 UTC m=+0.891722459 container init 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 13:36:14 compute-2 podman[79391]: 2026-01-22 13:36:14.49907106 +0000 UTC m=+0.900188943 container start 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:36:14 compute-2 podman[79391]: 2026-01-22 13:36:14.657731881 +0000 UTC m=+1.058849894 container attach 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:36:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test[79408]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 22 13:36:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test[79408]:                             [--no-systemd] [--no-tmpfs]
Jan 22 13:36:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test[79408]: ceph-volume activate: error: unrecognized arguments: --bad-option
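[editor's note] The "-activate-test" container above is not a deployment failure: cephadm appears to invoke ceph-volume activate with a deliberately unsupported flag so that the resulting usage text reveals which activate options (for example --no-tmpfs) this image supports; the real osd.2 activate unit is started right afterwards. Under that assumption, the probe can be reproduced by hand wherever ceph-volume is available (e.g. inside the same container image):

    # Feed ceph-volume an unknown flag and inspect the usage text it prints
    /usr/bin/ceph-volume activate --bad-option 2>&1 | grep -E -- '--no-(systemd|tmpfs)'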
Jan 22 13:36:15 compute-2 podman[79391]: 2026-01-22 13:36:15.178225201 +0000 UTC m=+1.579343084 container died 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:15 compute-2 systemd[1]: libpod-602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd.scope: Deactivated successfully.
Jan 22 13:36:15 compute-2 ceph-mon[77081]: 3.9 scrub starts
Jan 22 13:36:15 compute-2 ceph-mon[77081]: 3.9 scrub ok
Jan 22 13:36:15 compute-2 ceph-mon[77081]: pgmap v139: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:16 compute-2 systemd[1]: var-lib-containers-storage-overlay-657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea-merged.mount: Deactivated successfully.
Jan 22 13:36:16 compute-2 podman[79391]: 2026-01-22 13:36:16.998202247 +0000 UTC m=+3.399320170 container remove 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:36:17 compute-2 ceph-mon[77081]: 5.3 scrub starts
Jan 22 13:36:17 compute-2 ceph-mon[77081]: 5.3 scrub ok
Jan 22 13:36:17 compute-2 ceph-mon[77081]: pgmap v140: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:17 compute-2 systemd[1]: libpod-conmon-602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd.scope: Deactivated successfully.
Jan 22 13:36:17 compute-2 systemd[1]: Reloading.
Jan 22 13:36:17 compute-2 systemd-rc-local-generator[79472]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:17 compute-2 systemd-sysv-generator[79476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:18 compute-2 systemd[1]: Reloading.
Jan 22 13:36:18 compute-2 systemd-rc-local-generator[79514]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:18 compute-2 systemd-sysv-generator[79518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:18 compute-2 systemd[1]: Starting Ceph osd.2 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:36:18 compute-2 ceph-mon[77081]: pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:18 compute-2 podman[79571]: 2026-01-22 13:36:18.605062879 +0000 UTC m=+0.033276802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:18 compute-2 podman[79571]: 2026-01-22 13:36:18.796340904 +0000 UTC m=+0.224554797 container create 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 13:36:18 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:18 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:19 compute-2 podman[79571]: 2026-01-22 13:36:19.000703944 +0000 UTC m=+0.428917857 container init 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:36:19 compute-2 podman[79571]: 2026-01-22 13:36:19.007620937 +0000 UTC m=+0.435834830 container start 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 13:36:19 compute-2 podman[79571]: 2026-01-22 13:36:19.045691505 +0000 UTC m=+0.473905398 container attach 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 13:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 13:36:20 compute-2 bash[79571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 13:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:36:20 compute-2 bash[79571]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:36:20 compute-2 bash[79571]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 13:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:36:20 compute-2 bash[79571]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 13:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:20 compute-2 bash[79571]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 13:36:20 compute-2 bash[79571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 13:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: --> ceph-volume raw activate successful for osd ID: 2
Jan 22 13:36:20 compute-2 bash[79571]: --> ceph-volume raw activate successful for osd ID: 2
Jan 22 13:36:20 compute-2 systemd[1]: libpod-8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f.scope: Deactivated successfully.
Jan 22 13:36:20 compute-2 systemd[1]: libpod-8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f.scope: Consumed 1.265s CPU time.
Jan 22 13:36:20 compute-2 podman[79699]: 2026-01-22 13:36:20.306109566 +0000 UTC m=+0.037660338 container died 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:23 compute-2 ceph-mon[77081]: 3.a deep-scrub starts
Jan 22 13:36:23 compute-2 ceph-mon[77081]: 3.a deep-scrub ok
Jan 22 13:36:25 compute-2 systemd[1]: var-lib-containers-storage-overlay-22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402-merged.mount: Deactivated successfully.
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 4.4 scrub starts
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 4.4 scrub ok
Jan 22 13:36:26 compute-2 ceph-mon[77081]: pgmap v142: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 3.1d scrub starts
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 3.1d scrub ok
Jan 22 13:36:26 compute-2 ceph-mon[77081]: pgmap v143: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 3.1c scrub starts
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 3.1c scrub ok
Jan 22 13:36:26 compute-2 ceph-mon[77081]: pgmap v144: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 7.1 scrub starts
Jan 22 13:36:26 compute-2 ceph-mon[77081]: 7.1 scrub ok
Jan 22 13:36:27 compute-2 podman[79699]: 2026-01-22 13:36:27.243977621 +0000 UTC m=+6.975528413 container remove 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 13:36:27 compute-2 podman[79759]: 2026-01-22 13:36:27.452892992 +0000 UTC m=+0.024889260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:27 compute-2 podman[79759]: 2026-01-22 13:36:27.721066092 +0000 UTC m=+0.293062360 container create 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:28 compute-2 ceph-mon[77081]: 5.5 scrub starts
Jan 22 13:36:28 compute-2 ceph-mon[77081]: 5.5 scrub ok
Jan 22 13:36:28 compute-2 ceph-mon[77081]: pgmap v145: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:28 compute-2 ceph-mon[77081]: 4.6 scrub starts
Jan 22 13:36:28 compute-2 ceph-mon[77081]: 4.6 scrub ok
Jan 22 13:36:28 compute-2 ceph-mon[77081]: 7.5 scrub starts
Jan 22 13:36:28 compute-2 ceph-mon[77081]: 7.5 scrub ok
Jan 22 13:36:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:28 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:28 compute-2 podman[79759]: 2026-01-22 13:36:28.83628987 +0000 UTC m=+1.408286118 container init 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 13:36:28 compute-2 podman[79759]: 2026-01-22 13:36:28.843373387 +0000 UTC m=+1.415369625 container start 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 13:36:29 compute-2 ceph-osd[79779]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:36:29 compute-2 ceph-osd[79779]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 22 13:36:29 compute-2 ceph-osd[79779]: pidfile_write: ignore empty --pid-file
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 13:36:29 compute-2 bash[79759]: 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d
Jan 22 13:36:29 compute-2 systemd[1]: Started Ceph osd.2 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:36:29 compute-2 ceph-mon[77081]: 7.7 deep-scrub starts
Jan 22 13:36:29 compute-2 ceph-mon[77081]: 7.7 deep-scrub ok
Jan 22 13:36:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:29 compute-2 ceph-mon[77081]: pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:29 compute-2 sudo[79277]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 13:36:29 compute-2 ceph-osd[79779]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 22 13:36:29 compute-2 ceph-osd[79779]: load: jerasure load: lrc 
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:36:29 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 13:36:30 compute-2 ceph-osd[79779]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 22 13:36:30 compute-2 ceph-osd[79779]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs mount
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs mount shared_bdev_used = 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: RocksDB version: 7.9.2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Git sha 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: DB SUMMARY
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: DB Session ID:  HGFKAE26TIJZ4TV8SS1B
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: CURRENT file:  CURRENT
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.error_if_exists: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.create_if_missing: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                     Options.env: 0x557359bc7f10
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                Options.info_log: 0x557358daeca0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                              Options.statistics: (nil)
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.use_fsync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                              Options.db_log_dir: 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.write_buffer_manager: 0x557359cc8460
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.unordered_write: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.row_cache: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                              Options.wal_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.two_write_queues: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.wal_compression: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.atomic_flush: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_background_jobs: 4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_background_compactions: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_subcompactions: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.max_open_files: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Compression algorithms supported:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kZSTD supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kXpressCompression supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kBZip2Compression supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kLZ4Compression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kZlibCompression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kLZ4HCCompression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kSnappyCompression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae6c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da4430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
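All column families dumped above share the same level sizing (max_bytes_for_level_base 1073741824, max_bytes_for_level_multiplier 8, all addtl multipliers 1, num_levels 7, dynamic level bytes off), so the static per-level capacity targets follow directly; a minimal Python sketch using only the values printed in this log:

    base = 1073741824        # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0         # Options.max_bytes_for_level_multiplier
    num_levels = 7           # Options.num_levels; L0 is triggered by file count, not bytes

    for level in range(1, num_levels):
        print(f"L{level}: {base * multiplier ** (level - 1) / 2**30:,.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4,096 GiB, L6: 32,768 GiB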
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11973bfc-0335-469d-b17c-3e572773de22
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990413810, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990414214, "job": 1, "event": "recovery_finished"}
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
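The option string in the _open_db line above is a flat comma-separated key=value list; a hypothetical helper (parse_rocksdb_opts is illustrative, not a Ceph or RocksDB API) can split it for inspection:

    def parse_rocksdb_opts(s: str) -> dict:
        # Split "k1=v1,k2=v2,..." into a dict; assumes values contain no
        # commas, which holds for the string logged above.
        return dict(pair.split("=", 1) for pair in s.split(","))

    opts = parse_rocksdb_opts(
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
    )
    print(opts["write_buffer_size"])  # 16777216, matching Options.write_buffer_size above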
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: freelist init
Jan 22 13:36:30 compute-2 ceph-osd[79779]: freelist _read_cfg
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
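The _init_alloc figures above are self-consistent; converting the hex values in Python:

    capacity = 0x1bfc00000   # from the _init_alloc line above
    free     = 0x1bfbfd000
    print(f"{capacity / 2**30:.1f} GiB")   # 7.0 GiB, as logged (and as the bdev open size below)
    print(hex(capacity - free))            # 0x3000 -- 12288 bytes currently allocated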
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs umount
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs mount
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluefs mount shared_bdev_used = 4718592
Jan 22 13:36:30 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: RocksDB version: 7.9.2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Git sha 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: DB SUMMARY
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: DB Session ID:  HGFKAE26TIJZ4TV8SS1A
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: CURRENT file:  CURRENT
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.error_if_exists: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.create_if_missing: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                     Options.env: 0x557358ef64d0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                Options.info_log: 0x557358daf980
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                              Options.statistics: (nil)
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.use_fsync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                              Options.db_log_dir: 
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.write_buffer_manager: 0x557359cc8460
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.unordered_write: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.row_cache: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                              Options.wal_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.two_write_queues: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.wal_compression: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.atomic_flush: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_background_jobs: 4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_background_compactions: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_subcompactions: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.max_open_files: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Compression algorithms supported:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kZSTD supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kXpressCompression supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kBZip2Compression supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kLZ4Compression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kZlibCompression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kLZ4HCCompression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         kSnappyCompression supported: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
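The raw byte counts in the option dump above are easier to read as sizes. The sketch below is plain Python and not part of the log; every constant is copied from the logged options, and it assumes the plain level-style sizing that applies when level_compaction_dynamic_level_bytes is 0, as logged.

    # Illustrative sketch only -- not part of the log.
    # All constants are copied from the RocksDB option dump above.
    write_buffer_size = 16_777_216   # Options.write_buffer_size (16 MiB)
    min_merge = 6                    # Options.min_write_buffer_number_to_merge
    l0_trigger = 8                   # Options.level0_file_num_compaction_trigger
    level_base = 1_073_741_824       # Options.max_bytes_for_level_base (1 GiB)
    level_mult = 8.0                 # Options.max_bytes_for_level_multiplier
    num_levels = 7                   # Options.num_levels

    # A flush merges 6 x 16 MiB memtables into one L0 file, so each L0 file
    # is up to ~96 MiB before LZ4 compression and deduplication shrink it.
    flush_size = write_buffer_size * min_merge
    print(f"per-flush L0 file: up to ~{flush_size / 2**20:.0f} MiB")
    print(f"L0 at compaction trigger: ~{flush_size * l0_trigger / 2**20:.0f} MiB")

    # Target capacity of each level: L1 = 1 GiB, then x8 per level.
    for level in range(1, num_levels):
        cap = level_base * level_mult ** (level - 1)
        print(f"L{level} target: {cap / 2**30:.0f} GiB")

With these values the L0 compaction trigger fires at roughly 768 MiB of L0 data, just under the 1 GiB L1 target, so L0-to-L1 compactions stay roughly level-sized.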
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db80a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da54b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db80a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da54b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db80a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557358da54b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11973bfc-0335-469d-b17c-3e572773de22
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990682556, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990827756, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088990, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11973bfc-0335-469d-b17c-3e572773de22", "db_session_id": "HGFKAE26TIJZ4TV8SS1A", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990872099, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088990, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11973bfc-0335-469d-b17c-3e572773de22", "db_session_id": "HGFKAE26TIJZ4TV8SS1A", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990901660, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088990, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11973bfc-0335-469d-b17c-3e572773de22", "db_session_id": "HGFKAE26TIJZ4TV8SS1A", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990903342, "job": 1, "event": "recovery_finished"}
Jan 22 13:36:30 compute-2 ceph-osd[79779]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 22 13:36:31 compute-2 ceph-mon[77081]: 7.c scrub starts
Jan 22 13:36:31 compute-2 ceph-mon[77081]: 7.c scrub ok
Jan 22 13:36:31 compute-2 ceph-mon[77081]: pgmap v147: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557358e77c00
Jan 22 13:36:31 compute-2 ceph-osd[79779]: rocksdb: DB pointer 0x557359cb3a00
Jan 22 13:36:31 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 13:36:31 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 22 13:36:31 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 22 13:36:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:36:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
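
The repeated blocks above are RocksDB statistics that the OSD's BlueStore backend dumps once per column family ([p-0] through [P]); they are all near-zero here because the OSD has only just been created. A minimal sketch for pulling the per-column-family cumulative compaction figures out of a saved copy of this journal (the journal.txt file name and the regexes are assumptions for illustration, not part of any Ceph tooling):

    import re

    # Match the "** Compaction Stats [X] **" headers and the
    # "Cumulative compaction: ..." summary line inside each block.
    header = re.compile(r'\*\* Compaction Stats \[([^\]]+)\] \*\*')
    summary = re.compile(r'Cumulative compaction: ([\d.]+) GB write,'
                         r'.*?([\d.]+) GB read')

    cf = None
    with open('journal.txt') as fh:          # assumed capture of this log
        for line in fh:
            m = header.search(line)
            if m:
                cf = m.group(1)
                continue
            m = summary.search(line)
            if m and cf is not None:
                print(f'{cf}: wrote {m.group(1)} GB, read {m.group(2)} GB')
                cf = None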
Jan 22 13:36:31 compute-2 ceph-osd[79779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 22 13:36:31 compute-2 ceph-osd[79779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 22 13:36:31 compute-2 ceph-osd[79779]: _get_class not permitted to load lua
Jan 22 13:36:31 compute-2 ceph-osd[79779]: _get_class not permitted to load sdk
Jan 22 13:36:31 compute-2 ceph-osd[79779]: _get_class not permitted to load test_remote_reads
Jan 22 13:36:31 compute-2 ceph-osd[79779]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 22 13:36:31 compute-2 ceph-osd[79779]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 22 13:36:31 compute-2 ceph-osd[79779]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 22 13:36:31 compute-2 ceph-osd[79779]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 22 13:36:31 compute-2 ceph-osd[79779]: osd.2 0 load_pgs
Jan 22 13:36:31 compute-2 ceph-osd[79779]: osd.2 0 load_pgs opened 0 pgs
Jan 22 13:36:31 compute-2 ceph-osd[79779]: osd.2 0 log_to_monitors true
Jan 22 13:36:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:36:31.215+0000 7f4800129740 -1 osd.2 0 log_to_monitors true
Jan 22 13:36:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 22 13:36:31 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
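
This is the OSD registering its own device class with the monitors at boot. The same operation, as documented for the ceph CLI, can be issued by hand; a hedged sketch via subprocess (note that if the OSD already carries a class, ceph osd crush rm-device-class has to be run first):

    import subprocess

    # CLI form of the mon_command dispatched above.
    subprocess.run(
        ['ceph', 'osd', 'crush', 'set-device-class', 'hdd', 'osd.2'],
        check=True)
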
Jan 22 13:36:31 compute-2 sudo[80212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:31 compute-2 sudo[80212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:31 compute-2 sudo[80212]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:31 compute-2 sudo[80237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:36:31 compute-2 sudo[80237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:31 compute-2 sudo[80237]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:31 compute-2 sudo[80262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:31 compute-2 sudo[80262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:31 compute-2 sudo[80262]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:31 compute-2 sudo[80287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- raw list --format json
Jan 22 13:36:31 compute-2 sudo[80287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:31 compute-2 podman[80350]: 2026-01-22 13:36:31.838608097 +0000 UTC m=+0.049194754 container create c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 13:36:31 compute-2 podman[80350]: 2026-01-22 13:36:31.814776396 +0000 UTC m=+0.025363073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:31 compute-2 systemd[1]: Started libpod-conmon-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope.
Jan 22 13:36:31 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:32 compute-2 podman[80350]: 2026-01-22 13:36:32.064650002 +0000 UTC m=+0.275236689 container init c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:36:32 compute-2 podman[80350]: 2026-01-22 13:36:32.074453202 +0000 UTC m=+0.285039859 container start c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 13:36:32 compute-2 systemd[1]: libpod-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope: Deactivated successfully.
Jan 22 13:36:32 compute-2 podman[80350]: 2026-01-22 13:36:32.082356001 +0000 UTC m=+0.292942758 container attach c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 13:36:32 compute-2 suspicious_feynman[80366]: 167 167
Jan 22 13:36:32 compute-2 conmon[80366]: conmon c689d4272486e10734d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope/container/memory.events
Jan 22 13:36:32 compute-2 podman[80350]: 2026-01-22 13:36:32.084486237 +0000 UTC m=+0.295072934 container died c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:36:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-12558faf242503d40865f0d494e37a99083fe0711a9e6f4fa9cf4dc7c3621013-merged.mount: Deactivated successfully.
Jan 22 13:36:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 22 13:36:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 22 13:36:32 compute-2 podman[80350]: 2026-01-22 13:36:32.211752526 +0000 UTC m=+0.422339213 container remove c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:32 compute-2 systemd[1]: libpod-conmon-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope: Deactivated successfully.
Jan 22 13:36:32 compute-2 podman[80393]: 2026-01-22 13:36:32.344023758 +0000 UTC m=+0.019059035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:32 compute-2 podman[80393]: 2026-01-22 13:36:32.473742793 +0000 UTC m=+0.148778050 container create e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 13:36:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e44 e44: 3 total, 2 up, 3 in
Jan 22 13:36:32 compute-2 ceph-mon[77081]: 5.6 deep-scrub starts
Jan 22 13:36:32 compute-2 ceph-mon[77081]: 5.6 deep-scrub ok
Jan 22 13:36:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:32 compute-2 ceph-mon[77081]: 7.d scrub starts
Jan 22 13:36:32 compute-2 ceph-mon[77081]: 7.d scrub ok
Jan 22 13:36:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:32 compute-2 ceph-mon[77081]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 13:36:32 compute-2 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 13:36:32 compute-2 ceph-mon[77081]: pgmap v148: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:32 compute-2 ceph-mon[77081]: 7.11 scrub starts
Jan 22 13:36:32 compute-2 ceph-mon[77081]: 7.11 scrub ok
Jan 22 13:36:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 22 13:36:32 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
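
The weight the OSD reports here follows the usual CRUSH convention of device size in TiB, so 0.0068 corresponds to a roughly 7 GiB logical volume; a quick worked check:

    # CRUSH weight ~= device size in TiB (2**40 bytes).
    size_bytes = 0.0068 * 2**40
    print(f'{size_bytes / 2**30:.2f} GiB')   # ~6.96 GiB
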
Jan 22 13:36:32 compute-2 systemd[1]: Started libpod-conmon-e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b.scope.
Jan 22 13:36:32 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
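
The kernel messages above refer to the 32-bit time_t limit on this XFS filesystem's inode timestamps (presumably it was created without the bigtime feature); 0x7fffffff seconds after the epoch lands on 2038-01-19, as a quick check shows:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch timestamp.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit)   # 2038-01-19 03:14:07+00:00
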
Jan 22 13:36:32 compute-2 podman[80393]: 2026-01-22 13:36:32.662957412 +0000 UTC m=+0.337992709 container init e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 13:36:32 compute-2 podman[80393]: 2026-01-22 13:36:32.669601308 +0000 UTC m=+0.344636605 container start e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 13:36:32 compute-2 podman[80393]: 2026-01-22 13:36:32.862864485 +0000 UTC m=+0.537899792 container attach e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]: {
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]:     "3569f689-49d4-4dc0-921b-9570c720a1f3": {
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]:         "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]:         "osd_id": 2,
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]:         "osd_uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3",
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]:         "type": "bluestore"
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]:     }
Jan 22 13:36:33 compute-2 heuristic_swartz[80409]: }
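
The JSON block the heuristic_swartz container just printed is the output of the ceph-volume raw list --format json call launched by the cephadm command above. A small sketch for extracting the OSD-to-device mapping from such output (the payload below is copied verbatim from the log):

    import json

    raw_list = '''{
        "3569f689-49d4-4dc0-921b-9570c720a1f3": {
            "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 2,
            "osd_uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3",
            "type": "bluestore"
        }
    }'''

    for uuid, meta in json.loads(raw_list).items():
        print(f"osd.{meta['osd_id']} ({meta['type']}) on {meta['device']}")
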
Jan 22 13:36:33 compute-2 systemd[1]: libpod-e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b.scope: Deactivated successfully.
Jan 22 13:36:33 compute-2 podman[80393]: 2026-01-22 13:36:33.562010206 +0000 UTC m=+1.237045453 container died e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762-merged.mount: Deactivated successfully.
Jan 22 13:36:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e45 e45: 3 total, 2 up, 3 in
Jan 22 13:36:33 compute-2 ceph-osd[79779]: osd.2 0 done with init, starting boot process
Jan 22 13:36:33 compute-2 ceph-osd[79779]: osd.2 0 start_boot
Jan 22 13:36:33 compute-2 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 22 13:36:33 compute-2 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 22 13:36:33 compute-2 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 22 13:36:33 compute-2 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 22 13:36:33 compute-2 ceph-osd[79779]: osd.2 0  bench count 12288000 bsize 4 KiB
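
The maybe_override_options_for_qos lines show the mClock scheduler pinning the recovery and backfill knobs, and the bench line is the small boot-time benchmark the OSD runs to calibrate it. Assuming the count is total bytes, as the 4 KiB block size suggests, that works out to 3000 writes:

    # Boot-time bench: 12288000 bytes in 4 KiB blocks (assumed semantics).
    count_bytes, bsize = 12288000, 4096
    print(count_bytes // bsize)   # 3000 IOs
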
Jan 22 13:36:33 compute-2 ceph-mon[77081]: 4.7 deep-scrub starts
Jan 22 13:36:33 compute-2 ceph-mon[77081]: 4.7 deep-scrub ok
Jan 22 13:36:33 compute-2 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 13:36:33 compute-2 ceph-mon[77081]: osdmap e44: 3 total, 2 up, 3 in
Jan 22 13:36:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:33 compute-2 ceph-mon[77081]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 13:36:33 compute-2 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 13:36:33 compute-2 ceph-mon[77081]: pgmap v150: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:33 compute-2 podman[80393]: 2026-01-22 13:36:33.965532289 +0000 UTC m=+1.640567556 container remove e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:36:33 compute-2 systemd[1]: libpod-conmon-e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b.scope: Deactivated successfully.
Jan 22 13:36:33 compute-2 sudo[80287]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:34 compute-2 sudo[80444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:34 compute-2 sudo[80444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:34 compute-2 sudo[80444]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:34 compute-2 sudo[80469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:36:34 compute-2 sudo[80469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:34 compute-2 sudo[80469]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:34 compute-2 sudo[80494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:34 compute-2 sudo[80494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:34 compute-2 sudo[80494]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:34 compute-2 sudo[80519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:36:34 compute-2 sudo[80519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:35 compute-2 podman[80585]: 2026-01-22 13:36:34.970687152 +0000 UTC m=+0.023645507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:35 compute-2 podman[80585]: 2026-01-22 13:36:35.092562299 +0000 UTC m=+0.145520634 container create 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:36:35 compute-2 ceph-mon[77081]: purged_snaps scrub starts
Jan 22 13:36:35 compute-2 ceph-mon[77081]: purged_snaps scrub ok
Jan 22 13:36:35 compute-2 ceph-mon[77081]: 5.8 scrub starts
Jan 22 13:36:35 compute-2 ceph-mon[77081]: 5.8 scrub ok
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 13:36:35 compute-2 ceph-mon[77081]: osdmap e45: 3 total, 2 up, 3 in
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:35 compute-2 ceph-mon[77081]: 7.12 scrub starts
Jan 22 13:36:35 compute-2 ceph-mon[77081]: 7.12 scrub ok
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
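
Here the mgr provisions a keyring for the new RGW daemon, with OSD access scoped by the rgw pool tag. The CLI form of the same auth call, sketched with subprocess and with the entity and caps taken verbatim from the log:

    import subprocess

    subprocess.run(
        ['ceph', 'auth', 'get-or-create',
         'client.rgw.rgw.compute-2.gfsxzw',
         'mon', 'allow *',
         'mgr', 'allow rw',
         'osd', 'allow rwx tag rgw *=*'],
        check=True)
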
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:35 compute-2 ceph-mon[77081]: Deploying daemon rgw.rgw.compute-2.gfsxzw on compute-2
Jan 22 13:36:35 compute-2 systemd[1]: Started libpod-conmon-1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31.scope.
Jan 22 13:36:35 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:35 compute-2 podman[80585]: 2026-01-22 13:36:35.629626567 +0000 UTC m=+0.682584922 container init 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 13:36:35 compute-2 podman[80585]: 2026-01-22 13:36:35.63617112 +0000 UTC m=+0.689129455 container start 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 13:36:35 compute-2 inspiring_chatterjee[80601]: 167 167
Jan 22 13:36:35 compute-2 systemd[1]: libpod-1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31.scope: Deactivated successfully.
Jan 22 13:36:35 compute-2 podman[80585]: 2026-01-22 13:36:35.803338196 +0000 UTC m=+0.856296531 container attach 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 13:36:35 compute-2 podman[80585]: 2026-01-22 13:36:35.805506454 +0000 UTC m=+0.858464819 container died 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:35 compute-2 systemd[1]: var-lib-containers-storage-overlay-fe55f46ab7dbd8482d9592c955d295a9533bb2328b472113e026868a56d68a31-merged.mount: Deactivated successfully.
Jan 22 13:36:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:36 compute-2 ceph-mon[77081]: pgmap v152: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:36 compute-2 podman[80585]: 2026-01-22 13:36:36.427695417 +0000 UTC m=+1.480653752 container remove 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 13:36:36 compute-2 systemd[1]: libpod-conmon-1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31.scope: Deactivated successfully.
Jan 22 13:36:36 compute-2 systemd[1]: Reloading.
Jan 22 13:36:36 compute-2 systemd-rc-local-generator[80646]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:36 compute-2 systemd-sysv-generator[80651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:37 compute-2 systemd[1]: Reloading.
Jan 22 13:36:37 compute-2 systemd-rc-local-generator[80688]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:37 compute-2 systemd-sysv-generator[80691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:37 compute-2 systemd[1]: Starting Ceph rgw.rgw.compute-2.gfsxzw for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:36:37 compute-2 podman[80750]: 2026-01-22 13:36:37.892035877 +0000 UTC m=+0.072831199 container create 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:36:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.gfsxzw supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:37 compute-2 podman[80750]: 2026-01-22 13:36:37.85852452 +0000 UTC m=+0.039319872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:37 compute-2 podman[80750]: 2026-01-22 13:36:37.973127174 +0000 UTC m=+0.153922576 container init 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:36:37 compute-2 podman[80750]: 2026-01-22 13:36:37.981707381 +0000 UTC m=+0.162502733 container start 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:36:37 compute-2 bash[80750]: 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0
Jan 22 13:36:37 compute-2 systemd[1]: Started Ceph rgw.rgw.compute-2.gfsxzw for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:36:38 compute-2 radosgw[80769]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:36:38 compute-2 radosgw[80769]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 22 13:36:38 compute-2 radosgw[80769]: framework: beast
Jan 22 13:36:38 compute-2 radosgw[80769]: framework conf key: endpoint, val: 192.168.122.102:8082
Jan 22 13:36:38 compute-2 radosgw[80769]: init_numa not setting numa affinity
Jan 22 13:36:38 compute-2 sudo[80519]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:38 compute-2 ceph-mon[77081]: 4.b deep-scrub starts
Jan 22 13:36:38 compute-2 ceph-mon[77081]: 4.b deep-scrub ok
Jan 22 13:36:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:38 compute-2 ceph-mon[77081]: pgmap v153: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e46 e46: 3 total, 2 up, 3 in
Jan 22 13:36:40 compute-2 ceph-mon[77081]: 7.15 scrub starts
Jan 22 13:36:40 compute-2 ceph-mon[77081]: 7.15 scrub ok
Jan 22 13:36:40 compute-2 ceph-mon[77081]: 5.a scrub starts
Jan 22 13:36:40 compute-2 ceph-mon[77081]: 5.a scrub ok
Jan 22 13:36:40 compute-2 ceph-mon[77081]: 7.17 scrub starts
Jan 22 13:36:40 compute-2 ceph-mon[77081]: 7.17 scrub ok
Jan 22 13:36:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 22 13:36:40 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 13:36:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e47 e47: 3 total, 2 up, 3 in
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.851 iops: 4825.905 elapsed_sec: 0.622
Jan 22 13:36:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : OSD bench result of 4825.905468 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 0 waiting for initial osdmap
Jan 22 13:36:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:36:42.420+0000 7f47fc0a9640 -1 osd.2 0 waiting for initial osdmap
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 40 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 40 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 40 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 40 check_osdmap_features require_osd_release unknown -> reef
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 47 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 47 set_numa_affinity not setting numa affinity
Jan 22 13:36:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:36:42.464+0000 7f47f76d1640 -1 osd.2 47 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 13:36:42 compute-2 ceph-osd[79779]: osd.2 47 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 22 13:36:42 compute-2 ceph-mon[77081]: 5.c scrub starts
Jan 22 13:36:42 compute-2 ceph-mon[77081]: 5.c scrub ok
Jan 22 13:36:42 compute-2 ceph-mon[77081]: pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: 7.19 scrub starts
Jan 22 13:36:42 compute-2 ceph-mon[77081]: 7.19 scrub ok
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:42 compute-2 ceph-mon[77081]: 7.1a deep-scrub starts
Jan 22 13:36:42 compute-2 ceph-mon[77081]: 7.1a deep-scrub ok
Jan 22 13:36:42 compute-2 ceph-mon[77081]: osdmap e46: 3 total, 2 up, 3 in
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:42 compute-2 ceph-mon[77081]: Deploying daemon rgw.rgw.compute-1.thdhdp on compute-1
Jan 22 13:36:42 compute-2 ceph-mon[77081]: pgmap v156: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e48 e48: 3 total, 2 up, 3 in
Jan 22 13:36:43 compute-2 ceph-osd[79779]: osd.2 47 tick checking mon for new map
Jan 22 13:36:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 13:36:43 compute-2 ceph-mon[77081]: osdmap e47: 3 total, 2 up, 3 in
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:43 compute-2 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:36:43 compute-2 ceph-mon[77081]: OSD bench result of 4825.905468 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-2 ceph-mon[77081]: osdmap e48: 3 total, 2 up, 3 in
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:43 compute-2 ceph-mon[77081]: Deploying daemon rgw.rgw.compute-0.iqhnfa on compute-0
Jan 22 13:36:43 compute-2 ceph-mon[77081]: pgmap v159: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 22 13:36:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 22 13:36:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 22 13:36:44 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 49 state: booting -> active
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.1d( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.1b( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[7.1d( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.13( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.15( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.10( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.c( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.d( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.a( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.2( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.6( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.3( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[3.8( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=49) [2] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.1c( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[3.1b( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.19( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.14( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[7.14( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:45 compute-2 ceph-mon[77081]: osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328] boot
Jan 22 13:36:45 compute-2 ceph-mon[77081]: osdmap e49: 3 total, 3 up, 3 in
Jan 22 13:36:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 22 13:36:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 13:36:45 compute-2 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:36:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.15( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.12( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.9( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1a( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.15( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.16( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1f( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.8( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.9( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1d( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.15( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.0( empty local-lis/les=49/50 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.d( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.1d( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.10( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.b( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.2( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.0( empty local-lis/les=49/50 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.3( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.6( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[6.1( empty local-lis/les=49/50 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=49) [2] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.8( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1c( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.19( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1b( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.14( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.8( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.14( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.c( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.12( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.9( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.1f( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.15( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1f( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.8( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1a( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.16( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.9( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:36:45 compute-2 sudo[80832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:45 compute-2 sudo[80832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:45 compute-2 sudo[80832]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:46 compute-2 sudo[80857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:36:46 compute-2 sudo[80857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:46 compute-2 sudo[80857]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:46 compute-2 sudo[80882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:36:46 compute-2 sudo[80882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:46 compute-2 ceph-mon[77081]: pgmap v161: 179 pgs: 1 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 13:36:46 compute-2 ceph-mon[77081]: osdmap e50: 3 total, 3 up, 3 in
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-2 ceph-mon[77081]: 4.f scrub starts
Jan 22 13:36:46 compute-2 ceph-mon[77081]: 4.f scrub ok
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 13:36:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:46 compute-2 sudo[80882]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:46 compute-2 sudo[80907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:36:46 compute-2 sudo[80907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:36:46 compute-2 podman[80972]: 2026-01-22 13:36:46.577392456 +0000 UTC m=+0.022744183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:47 compute-2 podman[80972]: 2026-01-22 13:36:47.53990238 +0000 UTC m=+0.985254087 container create d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:36:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 22 13:36:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 22 13:36:48 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:48 compute-2 systemd[1]: Started libpod-conmon-d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823.scope.
Jan 22 13:36:48 compute-2 ceph-mon[77081]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 13:36:48 compute-2 ceph-mon[77081]: 7.1c scrub starts
Jan 22 13:36:48 compute-2 ceph-mon[77081]: 7.1c scrub ok
Jan 22 13:36:48 compute-2 ceph-mon[77081]: Deploying daemon mds.cephfs.compute-2.zycvef on compute-2
Jan 22 13:36:48 compute-2 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:36:48 compute-2 ceph-mon[77081]: 4.c scrub starts
Jan 22 13:36:48 compute-2 ceph-mon[77081]: 4.c scrub ok
Jan 22 13:36:48 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:36:48 compute-2 podman[80972]: 2026-01-22 13:36:48.110653081 +0000 UTC m=+1.556004868 container init d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:36:48 compute-2 podman[80972]: 2026-01-22 13:36:48.120695527 +0000 UTC m=+1.566047234 container start d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 13:36:48 compute-2 keen_faraday[80989]: 167 167
Jan 22 13:36:48 compute-2 systemd[1]: libpod-d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823.scope: Deactivated successfully.
Jan 22 13:36:48 compute-2 podman[80972]: 2026-01-22 13:36:48.141548909 +0000 UTC m=+1.586900646 container attach d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 22 13:36:48 compute-2 podman[80972]: 2026-01-22 13:36:48.142695139 +0000 UTC m=+1.588046846 container died d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 13:36:48 compute-2 systemd[1]: var-lib-containers-storage-overlay-542cab6476e614a5425a84c1c9049293258912d4c918c7eefc49479b4459ad2a-merged.mount: Deactivated successfully.
Jan 22 13:36:48 compute-2 podman[80972]: 2026-01-22 13:36:48.212625821 +0000 UTC m=+1.657977528 container remove d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:36:48 compute-2 systemd[1]: libpod-conmon-d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823.scope: Deactivated successfully.
Jan 22 13:36:48 compute-2 systemd[1]: Reloading.
Jan 22 13:36:48 compute-2 systemd-rc-local-generator[81030]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:48 compute-2 systemd-sysv-generator[81037]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:48 compute-2 systemd[1]: Reloading.
Jan 22 13:36:48 compute-2 systemd-sysv-generator[81079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:36:48 compute-2 systemd-rc-local-generator[81075]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:36:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 22 13:36:48 compute-2 systemd[1]: Starting Ceph mds.cephfs.compute-2.zycvef for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:36:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 22 13:36:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 22 13:36:49 compute-2 podman[81134]: 2026-01-22 13:36:49.149774113 +0000 UTC m=+0.062789224 container create 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:36:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.zycvef supports timestamps until 2038 (0x7fffffff)
Jan 22 13:36:49 compute-2 podman[81134]: 2026-01-22 13:36:49.110817201 +0000 UTC m=+0.023832312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:36:49 compute-2 podman[81134]: 2026-01-22 13:36:49.357233106 +0000 UTC m=+0.270248317 container init 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:36:49 compute-2 podman[81134]: 2026-01-22 13:36:49.36454738 +0000 UTC m=+0.277562531 container start 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 13:36:49 compute-2 bash[81134]: 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63
Jan 22 13:36:49 compute-2 ceph-mon[77081]: pgmap v163: 179 pgs: 1 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 22 13:36:49 compute-2 ceph-mon[77081]: 4.10 deep-scrub starts
Jan 22 13:36:49 compute-2 ceph-mon[77081]: 4.10 deep-scrub ok
Jan 22 13:36:49 compute-2 ceph-mon[77081]: 6.e scrub starts
Jan 22 13:36:49 compute-2 ceph-mon[77081]: 6.e scrub ok
Jan 22 13:36:49 compute-2 ceph-mon[77081]: osdmap e51: 3 total, 3 up, 3 in
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-2 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 13:36:49 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 13:36:49 compute-2 ceph-mon[77081]: osdmap e52: 3 total, 3 up, 3 in
Jan 22 13:36:49 compute-2 systemd[1]: Started Ceph mds.cephfs.compute-2.zycvef for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:36:49 compute-2 ceph-mds[81154]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:36:49 compute-2 ceph-mds[81154]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 22 13:36:49 compute-2 ceph-mds[81154]: main not setting numa affinity
Jan 22 13:36:49 compute-2 ceph-mds[81154]: pidfile_write: ignore empty --pid-file
Jan 22 13:36:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef[81150]: starting mds.cephfs.compute-2.zycvef at 
Jan 22 13:36:49 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 2 from mon.1
Jan 22 13:36:49 compute-2 sudo[80907]: pam_unix(sudo:session): session closed for user root
Jan 22 13:36:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 22 13:36:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 22 13:36:50 compute-2 ceph-mon[77081]: 4.1d scrub starts
Jan 22 13:36:50 compute-2 ceph-mon[77081]: 4.1d scrub ok
Jan 22 13:36:50 compute-2 ceph-mon[77081]: pgmap v166: 180 pgs: 2 creating+peering, 27 peering, 151 active+clean; 451 KiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 22 13:36:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:50 compute-2 ceph-mon[77081]: 6.d scrub starts
Jan 22 13:36:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 13:36:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 13:36:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:50 compute-2 ceph-mon[77081]: 5.12 scrub starts
Jan 22 13:36:50 compute-2 ceph-mon[77081]: 5.12 scrub ok
Jan 22 13:36:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e3 new map
Jan 22 13:36:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:35:18.163248+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.zycvef{-1:24139} state up:standby seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:36:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 3 from mon.1
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Monitors have assigned me to become a standby.
Jan 22 13:36:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 22 13:36:51 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e4 new map
Jan 22 13:36:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:51.171709+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:creating seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 4 from mon.1
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.4 handle_mds_map i am now mds.0.4
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x1
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x100
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x600
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x601
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x602
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x603
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x604
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x605
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x606
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x607
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x608
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x609
Jan 22 13:36:51 compute-2 ceph-mds[81154]: mds.0.4 creating_done
Jan 22 13:36:52 compute-2 ceph-mon[77081]: 4.11 scrub starts
Jan 22 13:36:52 compute-2 ceph-mon[77081]: 4.11 scrub ok
Jan 22 13:36:52 compute-2 ceph-mon[77081]: 6.d scrub ok
Jan 22 13:36:52 compute-2 ceph-mon[77081]: Deploying daemon mds.cephfs.compute-0.zjixst on compute-0
Jan 22 13:36:52 compute-2 ceph-mon[77081]: 6.5 scrub starts
Jan 22 13:36:52 compute-2 ceph-mon[77081]: 6.5 scrub ok
Jan 22 13:36:52 compute-2 ceph-mon[77081]: 4.12 scrub starts
Jan 22 13:36:52 compute-2 ceph-mon[77081]: 4.12 scrub ok
Jan 22 13:36:52 compute-2 ceph-mon[77081]: mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:boot
Jan 22 13:36:52 compute-2 ceph-mon[77081]: daemon mds.cephfs.compute-2.zycvef assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 13:36:52 compute-2 ceph-mon[77081]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 13:36:52 compute-2 ceph-mon[77081]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 13:36:52 compute-2 ceph-mon[77081]: Cluster is now healthy
Jan 22 13:36:52 compute-2 ceph-mon[77081]: fsmap cephfs:0 1 up:standby
Jan 22 13:36:52 compute-2 ceph-mon[77081]: osdmap e53: 3 total, 3 up, 3 in
Jan 22 13:36:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zycvef"}]: dispatch
Jan 22 13:36:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-2 ceph-mon[77081]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:creating}
Jan 22 13:36:52 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 13:36:52 compute-2 ceph-mon[77081]: pgmap v168: 181 pgs: 1 unknown, 2 active+clean+laggy, 1 creating+peering, 177 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 345 B/s wr, 4 op/s
Jan 22 13:36:52 compute-2 ceph-mon[77081]: daemon mds.cephfs.compute-2.zycvef is now active in filesystem cephfs as rank 0
Jan 22 13:36:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e5 new map
Jan 22 13:36:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 22 13:36:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 22 13:36:52 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 5 from mon.1
Jan 22 13:36:52 compute-2 ceph-mds[81154]: mds.0.4 handle_mds_map i am now mds.0.4
Jan 22 13:36:52 compute-2 ceph-mds[81154]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 22 13:36:52 compute-2 ceph-mds[81154]: mds.0.4 recovery_done -- successful recovery!
Jan 22 13:36:52 compute-2 ceph-mds[81154]: mds.0.4 active_start
Jan 22 13:36:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 22 13:36:52 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 22 13:36:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 22 13:36:53 compute-2 ceph-mon[77081]: 4.16 scrub starts
Jan 22 13:36:53 compute-2 ceph-mon[77081]: 4.16 scrub ok
Jan 22 13:36:53 compute-2 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 13:36:53 compute-2 ceph-mon[77081]: osdmap e54: 3 total, 3 up, 3 in
Jan 22 13:36:53 compute-2 ceph-mon[77081]: mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:active
Jan 22 13:36:53 compute-2 ceph-mon[77081]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active}
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 13:36:53 compute-2 ceph-mon[77081]: 6.2 scrub starts
Jan 22 13:36:53 compute-2 ceph-mon[77081]: 6.2 scrub ok
Jan 22 13:36:53 compute-2 ceph-mon[77081]: 5.b scrub starts
Jan 22 13:36:53 compute-2 ceph-mon[77081]: 5.b scrub ok
Jan 22 13:36:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e6 new map
Jan 22 13:36:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:36:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 22 13:36:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e7 new map
Jan 22 13:36:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:36:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:54 compute-2 ceph-mon[77081]: pgmap v170: 181 pgs: 1 unknown, 2 active+clean+laggy, 1 creating+peering, 177 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 362 B/s wr, 4 op/s
Jan 22 13:36:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 13:36:54 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 13:36:54 compute-2 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 13:36:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:54 compute-2 ceph-mon[77081]: osdmap e55: 3 total, 3 up, 3 in
Jan 22 13:36:54 compute-2 ceph-mon[77081]: mds.? [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] up:boot
Jan 22 13:36:54 compute-2 ceph-mon[77081]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 1 up:standby
Jan 22 13:36:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zjixst"}]: dispatch
Jan 22 13:36:54 compute-2 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 13:36:54 compute-2 ceph-mon[77081]: Cluster is now healthy
Jan 22 13:36:54 compute-2 ceph-mon[77081]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 1 up:standby
Jan 22 13:36:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 22 13:36:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 22 13:36:56 compute-2 radosgw[80769]: LDAP not started since no server URIs were provided in the configuration.
Jan 22 13:36:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw[80765]: 2026-01-22T13:36:56.186+0000 7f948b851940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 22 13:36:56 compute-2 radosgw[80769]: framework: beast
Jan 22 13:36:56 compute-2 radosgw[80769]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 22 13:36:56 compute-2 radosgw[80769]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 22 13:36:56 compute-2 radosgw[80769]: starting handler: beast
Jan 22 13:36:56 compute-2 radosgw[80769]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 13:36:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 22 13:36:56 compute-2 radosgw[80769]: mgrc service_daemon_register rgw.24151 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.gfsxzw,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=9ef52632-dffc-43fe-ad78-aca5b0d3574d,zone_name=default,zonegroup_id=961906d1-4e51-43eb-bd43-c4a4ab081aea,zonegroup_name=default}
Jan 22 13:36:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 22 13:36:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 22 13:36:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 22 13:36:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 22 13:36:57 compute-2 ceph-mds[81154]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 22 13:36:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef[81150]: 2026-01-22T13:36:57.205+0000 7f4cb34e4640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 22 13:36:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 13:36:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 13:36:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:36:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 13:36:57 compute-2 ceph-mon[77081]: 5.d scrub starts
Jan 22 13:36:57 compute-2 ceph-mon[77081]: 5.d scrub ok
Jan 22 13:36:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 13:36:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:36:57 compute-2 ceph-mon[77081]: Deploying daemon mds.cephfs.compute-1.ofmmzj on compute-1
Jan 22 13:36:57 compute-2 ceph-mon[77081]: pgmap v172: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 5.0 KiB/s wr, 20 op/s
Jan 22 13:36:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:36:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 22 13:36:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 22 13:36:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 22 13:36:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 22 13:36:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:36:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 22 13:36:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 22 13:36:59 compute-2 ceph-mon[77081]: pgmap v173: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 4.5 KiB/s wr, 16 op/s
Jan 22 13:37:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e8 new map
Jan 22 13:37:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:36:52.245537+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:37:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 22 13:37:00 compute-2 ceph-mon[77081]: 7.1d scrub starts
Jan 22 13:37:00 compute-2 ceph-mon[77081]: 7.1d scrub ok
Jan 22 13:37:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:00 compute-2 ceph-mon[77081]: osdmap e56: 3 total, 3 up, 3 in
Jan 22 13:37:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:37:00 compute-2 ceph-mon[77081]: 4.17 scrub starts
Jan 22 13:37:00 compute-2 ceph-mon[77081]: 5.1b scrub starts
Jan 22 13:37:00 compute-2 ceph-mon[77081]: 5.1b scrub ok
Jan 22 13:37:00 compute-2 ceph-mon[77081]: 4.17 scrub ok
Jan 22 13:37:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:00 compute-2 ceph-mon[77081]: pgmap v175: 181 pgs: 2 active+clean+laggy, 179 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 4.0 KiB/s wr, 14 op/s
Jan 22 13:37:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 22 13:37:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 22 13:37:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 22 13:37:02 compute-2 ceph-mon[77081]: 3.0 scrub starts
Jan 22 13:37:02 compute-2 ceph-mon[77081]: 3.0 scrub ok
Jan 22 13:37:02 compute-2 ceph-mon[77081]: 5.14 scrub starts
Jan 22 13:37:02 compute-2 ceph-mon[77081]: 5.14 scrub ok
Jan 22 13:37:02 compute-2 ceph-mon[77081]: 4.d deep-scrub starts
Jan 22 13:37:02 compute-2 ceph-mon[77081]: 4.d deep-scrub ok
Jan 22 13:37:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:02 compute-2 ceph-mon[77081]: osdmap e57: 3 total, 3 up, 3 in
Jan 22 13:37:02 compute-2 ceph-mon[77081]: mds.? [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] up:boot
Jan 22 13:37:02 compute-2 ceph-mon[77081]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 13:37:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ofmmzj"}]: dispatch
Jan 22 13:37:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:37:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 22 13:37:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 22 13:37:03 compute-2 ceph-mon[77081]: 5.17 scrub starts
Jan 22 13:37:03 compute-2 ceph-mon[77081]: 5.17 scrub ok
Jan 22 13:37:03 compute-2 ceph-mon[77081]: pgmap v177: 181 pgs: 2 active+clean+laggy, 179 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 5.2 KiB/s wr, 161 op/s
Jan 22 13:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:03 compute-2 ceph-mon[77081]: osdmap e58: 3 total, 3 up, 3 in
Jan 22 13:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 13:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 22 13:37:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e9 new map
Jan 22 13:37:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:37:03.744747+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:37:03 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 9 from mon.1
Jan 22 13:37:04 compute-2 ceph-mon[77081]: 5.0 scrub starts
Jan 22 13:37:04 compute-2 ceph-mon[77081]: 5.0 scrub ok
Jan 22 13:37:04 compute-2 ceph-mon[77081]: Deploying daemon haproxy.rgw.default.compute-0.erkqlp on compute-0
Jan 22 13:37:04 compute-2 ceph-mon[77081]: 6.3 scrub starts
Jan 22 13:37:04 compute-2 ceph-mon[77081]: 6.3 scrub ok
Jan 22 13:37:04 compute-2 ceph-mon[77081]: 6.1 scrub starts
Jan 22 13:37:04 compute-2 ceph-mon[77081]: 6.1 scrub ok
Jan 22 13:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 13:37:04 compute-2 ceph-mon[77081]: osdmap e59: 3 total, 3 up, 3 in
Jan 22 13:37:04 compute-2 ceph-mon[77081]: pgmap v180: 243 pgs: 62 unknown, 2 active+clean+laggy, 179 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 113 KiB/s rd, 3.4 KiB/s wr, 200 op/s
Jan 22 13:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:04 compute-2 ceph-mon[77081]: mds.? [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] up:standby
Jan 22 13:37:04 compute-2 ceph-mon[77081]: mds.? [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] up:active
Jan 22 13:37:04 compute-2 ceph-mon[77081]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 13:37:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 22 13:37:05 compute-2 ceph-mon[77081]: 5.19 scrub starts
Jan 22 13:37:05 compute-2 ceph-mon[77081]: 5.f deep-scrub starts
Jan 22 13:37:05 compute-2 ceph-mon[77081]: 5.f deep-scrub ok
Jan 22 13:37:05 compute-2 ceph-mon[77081]: 5.19 scrub ok
Jan 22 13:37:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 22 13:37:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 22 13:37:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e10 new map
Jan 22 13:37:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).mds e10 print_map
                                           e10
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-22T13:35:18.163168+0000
                                           modified        2026-01-22T13:37:03.744747+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 13:37:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 22 13:37:06 compute-2 ceph-mon[77081]: 4.1e scrub starts
Jan 22 13:37:06 compute-2 ceph-mon[77081]: 4.1e scrub ok
Jan 22 13:37:06 compute-2 ceph-mon[77081]: 4.1a scrub starts
Jan 22 13:37:06 compute-2 ceph-mon[77081]: 4.1a scrub ok
Jan 22 13:37:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 13:37:06 compute-2 ceph-mon[77081]: osdmap e60: 3 total, 3 up, 3 in
Jan 22 13:37:06 compute-2 ceph-mon[77081]: pgmap v182: 305 pgs: 1 peering, 62 unknown, 2 active+clean+laggy, 240 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 98 op/s
Jan 22 13:37:07 compute-2 ceph-mon[77081]: 5.1d deep-scrub starts
Jan 22 13:37:07 compute-2 ceph-mon[77081]: 5.1d deep-scrub ok
Jan 22 13:37:07 compute-2 ceph-mon[77081]: 4.19 scrub starts
Jan 22 13:37:07 compute-2 ceph-mon[77081]: 4.19 scrub ok
Jan 22 13:37:07 compute-2 ceph-mon[77081]: mds.? [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] up:standby
Jan 22 13:37:07 compute-2 ceph-mon[77081]: fsmap cephfs:1 {0=cephfs.compute-2.zycvef=up:active} 2 up:standby
Jan 22 13:37:07 compute-2 ceph-mon[77081]: osdmap e61: 3 total, 3 up, 3 in
Jan 22 13:37:07 compute-2 ceph-mon[77081]: pgmap v184: 305 pgs: 1 peering, 62 unknown, 2 active+clean+laggy, 240 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 91 op/s
Jan 22 13:37:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 22 13:37:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 22 13:37:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 22 13:37:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 22 13:37:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000079s ======
Jan 22 13:37:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:10.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 22 13:37:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:11 compute-2 ceph-mon[77081]: 4.1c scrub starts
Jan 22 13:37:11 compute-2 ceph-mon[77081]: 4.1c scrub ok
Jan 22 13:37:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.1e( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.1c( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.2( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.3( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.16( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.a( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.9( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.11( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.b( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.4( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.6( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.19( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.1f( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.10( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.11( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.f( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.e( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.d( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.a( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.f( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.1( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.3( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.12( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.13( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.16( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.15( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.5( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.17( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.c( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 5.1e scrub starts
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 5.1e scrub ok
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 5.7 scrub starts
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 5.7 scrub ok
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 7.a scrub starts
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 7.a scrub ok
Jan 22 13:37:13 compute-2 ceph-mon[77081]: pgmap v185: 305 pgs: 1 peering, 31 unknown, 2 active+clean+laggy, 271 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 82 op/s
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 6.4 scrub starts
Jan 22 13:37:13 compute-2 ceph-mon[77081]: 6.4 scrub ok
Jan 22 13:37:13 compute-2 ceph-mon[77081]: pgmap v186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 13:37:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 13:37:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:37:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 22 13:37:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:14.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:16.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.552034855s, txc = 0x55735af63200
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551962852s, txc = 0x557359b88300
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551544666s, txc = 0x557359b88c00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551220417s, txc = 0x55735a61cf00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551022530s, txc = 0x55735a27e300
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.550522327s, txc = 0x557359bde000
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.550064087s, txc = 0x55735af63500
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.549757481s, txc = 0x55735a61d200
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.549332619s, txc = 0x55735a61d500
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.549151421s, txc = 0x55735a27e600
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548954487s, txc = 0x55735a2f6000
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548785686s, txc = 0x55735a2f6300
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548469543s, txc = 0x557359b88f00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548099995s, txc = 0x55735af63800
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.547780991s, txc = 0x55735a61d800
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.547410965s, txc = 0x557359bde300
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.547196388s, txc = 0x55735a2f6600
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.546813488s, txc = 0x557359b89200
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.546355247s, txc = 0x557359b89500
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545954704s, txc = 0x55735a61db00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545588970s, txc = 0x55735a7fcf00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545316696s, txc = 0x55735a8acf00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545132637s, txc = 0x55735a8ad200
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.544943810s, txc = 0x55735a8ad500
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.544633865s, txc = 0x55735a7fc300
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.544329643s, txc = 0x55735a7fd200
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543985367s, txc = 0x55735a7fd800
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543646336s, txc = 0x55735a7fc600
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543431759s, txc = 0x55735a8ad800
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543244839s, txc = 0x55735a8adb00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.542670727s, txc = 0x55735b635200
Jan 22 13:37:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723423004s, txc = 0x557359bde600
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723418713s, txc = 0x55735a7fdb00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723375797s, txc = 0x55735b508000
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723512173s, txc = 0x55735b226f00
Jan 22 13:37:19 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723829269s, txc = 0x55735af63b00
Jan 22 13:37:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 22 13:37:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:20.229+0000 7f47f8ed4640 -1 osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:20 compute-2 ceph-osd[79779]: osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:20 compute-2 ceph-mon[77081]: 6.6 scrub starts
Jan 22 13:37:20 compute-2 ceph-mon[77081]: 6.6 scrub ok
Jan 22 13:37:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:37:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:37:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 13:37:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:37:20 compute-2 ceph-mon[77081]: 6.9 scrub starts
Jan 22 13:37:20 compute-2 ceph-mon[77081]: 6.7 scrub starts
Jan 22 13:37:20 compute-2 ceph-mon[77081]: 6.7 scrub ok
Jan 22 13:37:20 compute-2 ceph-mon[77081]: 6.9 scrub ok
Jan 22 13:37:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:20 compute-2 ceph-mon[77081]: osdmap e62: 3 total, 3 up, 3 in
Jan 22 13:37:20 compute-2 ceph-mon[77081]: pgmap v188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 13:37:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 13:37:20 compute-2 sudo[81738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:20 compute-2 sudo[81738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:20 compute-2 sudo[81738]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:20.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:20 compute-2 sudo[81763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:37:20 compute-2 sudo[81763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:20 compute-2 sudo[81763]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:20 compute-2 sudo[81788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:20 compute-2 sudo[81788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:20 compute-2 sudo[81788]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:20 compute-2 sudo[81813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:37:20 compute-2 sudo[81813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:21.262+0000 7f47f8ed4640 -1 osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:21 compute-2 ceph-osd[79779]: osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:22.310+0000 7f47f8ed4640 -1 osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:22 compute-2 ceph-osd[79779]: osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:37:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:37:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 6.b scrub starts
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 6.b scrub ok
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 7.14 scrub starts
Jan 22 13:37:23 compute-2 ceph-mon[77081]: pgmap v189: 305 pgs: 30 peering, 2 active+clean+laggy, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 0 B/s wr, 95 op/s
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 6.c scrub starts
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 6.c scrub ok
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 6.f scrub starts
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 6.f scrub ok
Jan 22 13:37:23 compute-2 ceph-mon[77081]: pgmap v190: 305 pgs: 30 peering, 2 active+clean+laggy, 273 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 0 B/s wr, 83 op/s
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 7.1b scrub starts
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 7.1b scrub ok
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 7.13 scrub starts
Jan 22 13:37:23 compute-2 ceph-mon[77081]: 7.13 scrub ok
Jan 22 13:37:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 13:37:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:23 compute-2 ceph-mon[77081]: osdmap e63: 3 total, 3 up, 3 in
Jan 22 13:37:23 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.1c( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:23 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.2( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:23.287+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:23 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:24.263+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.9( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.3( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.a( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.8( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.16( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.b( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.6( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.19( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.e( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.11( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.3( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.a( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.13( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.d( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.16( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.1f( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.15( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.17( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.5( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=62/64 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.f( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=62/64 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.3( v 61'99 lc 57'84 (0'0,61'99] local-lis/les=62/64 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=61'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.c( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 7.14 scrub ok
Jan 22 13:37:24 compute-2 ceph-mon[77081]: pgmap v192: 305 pgs: 1 active+clean+scrubbing, 61 peering, 2 active+clean+laggy, 241 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 50 op/s
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 7.10 scrub starts
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 7.10 scrub ok
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:24 compute-2 ceph-mon[77081]: Deploying daemon haproxy.rgw.default.compute-2.zogxki on compute-2
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 7.1e scrub starts
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 7.1e scrub ok
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-2 ceph-mon[77081]: pgmap v193: 305 pgs: 1 active+clean+scrubbing, 52 peering, 2 active+clean+laggy, 250 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 48 op/s; 0 B/s, 0 objects/s recovering
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 4.5 scrub starts
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 4.5 scrub ok
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:24 compute-2 ceph-mon[77081]: osdmap e64: 3 total, 3 up, 3 in
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 4.e scrub starts
Jan 22 13:37:24 compute-2 ceph-mon[77081]: 4.e scrub ok
Jan 22 13:37:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:24.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:25 compute-2 sshd-session[81915]: Accepted publickey for zuul from 192.168.122.30 port 51762 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:37:25 compute-2 systemd-logind[787]: New session 33 of user zuul.
Jan 22 13:37:25 compute-2 systemd[1]: Started Session 33 of User zuul.
Jan 22 13:37:25 compute-2 sshd-session[81915]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:37:25 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:25.213+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:25 compute-2 ceph-mon[77081]: pgmap v195: 305 pgs: 1 active+clean+scrubbing, 52 peering, 2 active+clean+laggy, 250 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:37:25 compute-2 ceph-mon[77081]: 4.1b deep-scrub starts
Jan 22 13:37:25 compute-2 ceph-mon[77081]: 4.1b deep-scrub ok
Jan 22 13:37:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:26 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 22 13:37:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:26.200+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 22 13:37:26 compute-2 python3.9[82084]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:37:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:26.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:26 compute-2 ceph-mon[77081]: 4.14 scrub starts
Jan 22 13:37:26 compute-2 ceph-mon[77081]: 4.14 scrub ok
Jan 22 13:37:27 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:27.180+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:27 compute-2 podman[81879]: 2026-01-22 13:37:27.27433326 +0000 UTC m=+5.942712441 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 13:37:27 compute-2 podman[81879]: 2026-01-22 13:37:27.298491143 +0000 UTC m=+5.966870294 container create c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 13:37:27 compute-2 systemd[1]: Started libpod-conmon-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope.
Jan 22 13:37:27 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:37:27 compute-2 podman[81879]: 2026-01-22 13:37:27.403955092 +0000 UTC m=+6.072334273 container init c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 13:37:27 compute-2 podman[81879]: 2026-01-22 13:37:27.413378267 +0000 UTC m=+6.081757438 container start c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 13:37:27 compute-2 podman[81879]: 2026-01-22 13:37:27.41868963 +0000 UTC m=+6.087068801 container attach c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 13:37:27 compute-2 loving_goodall[82239]: 0 0
Jan 22 13:37:27 compute-2 systemd[1]: libpod-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope: Deactivated successfully.
Jan 22 13:37:27 compute-2 conmon[82239]: conmon c7e98b80654d1d1e4c8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope/container/memory.events
Jan 22 13:37:27 compute-2 podman[81879]: 2026-01-22 13:37:27.421131696 +0000 UTC m=+6.089510877 container died c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 13:37:27 compute-2 systemd[1]: var-lib-containers-storage-overlay-7afb65ef436f8de8211342ae0f3f01e8b45e5591ea29bd0d6446be2c2825b425-merged.mount: Deactivated successfully.
Jan 22 13:37:27 compute-2 podman[81879]: 2026-01-22 13:37:27.470664065 +0000 UTC m=+6.139043216 container remove c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 13:37:27 compute-2 systemd[1]: libpod-conmon-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope: Deactivated successfully.
Jan 22 13:37:27 compute-2 systemd[1]: Reloading.
Jan 22 13:37:27 compute-2 systemd-sysv-generator[82340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:37:27 compute-2 systemd-rc-local-generator[82334]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:37:27 compute-2 systemd[1]: Reloading.
Jan 22 13:37:27 compute-2 systemd-rc-local-generator[82453]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:37:27 compute-2 systemd-sysv-generator[82456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:37:28 compute-2 sudo[82425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnuhsadptqqxjfygnxqacjpghlmucjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089047.422827-60-185526240605835/AnsiballZ_command.py'
Jan 22 13:37:28 compute-2 sudo[82425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:37:28 compute-2 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.zogxki for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:37:28 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:28.217+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:28 compute-2 python3.9[82463]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:37:28 compute-2 podman[82513]: 2026-01-22 13:37:28.303426176 +0000 UTC m=+0.021754869 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 13:37:28 compute-2 podman[82513]: 2026-01-22 13:37:28.507057928 +0000 UTC m=+0.225386601 container create ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:37:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:28.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:28 compute-2 ceph-mon[77081]: pgmap v196: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 161 B/s, 0 objects/s recovering
Jan 22 13:37:28 compute-2 ceph-mon[77081]: Health check failed: 2 slow ops, oldest one blocked for 36 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:28 compute-2 ceph-mon[77081]: 5.9 scrub starts
Jan 22 13:37:28 compute-2 ceph-mon[77081]: 5.9 scrub ok
Jan 22 13:37:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb07ffd7b803d428dbe6adac05a87d7037dad80cef11765c51e4ad5be67c2ac1/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 22 13:37:29 compute-2 podman[82513]: 2026-01-22 13:37:29.014283463 +0000 UTC m=+0.732612166 container init ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:37:29 compute-2 podman[82513]: 2026-01-22 13:37:29.020564172 +0000 UTC m=+0.738892845 container start ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:37:29 compute-2 bash[82513]: ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f
Jan 22 13:37:29 compute-2 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.zogxki for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:37:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki[82538]: [NOTICE] 021/133729 (2) : New worker #1 (4) forked
Jan 22 13:37:29 compute-2 sudo[81813]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:29.207+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:29 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:30 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:30.193+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:30.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:31 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 22 13:37:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:31.201+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 22 13:37:32 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:32.173+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:32.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:32 compute-2 ceph-mon[77081]: pgmap v197: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 161 B/s, 0 objects/s recovering
Jan 22 13:37:32 compute-2 ceph-mon[77081]: 7.b scrub starts
Jan 22 13:37:32 compute-2 ceph-mon[77081]: 7.b scrub ok
Jan 22 13:37:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:32 compute-2 ceph-mon[77081]: 7.8 scrub starts
Jan 22 13:37:32 compute-2 ceph-mon[77081]: 7.8 scrub ok
Jan 22 13:37:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:32.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:33 compute-2 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:33.125+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 4.a deep-scrub starts
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 4.a deep-scrub ok
Jan 22 13:37:33 compute-2 ceph-mon[77081]: pgmap v198: 305 pgs: 29 activating, 2 active+clean+laggy, 274 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6/217 objects degraded (2.765%); 129 B/s, 0 objects/s recovering
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 5.13 scrub starts
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 5.13 scrub ok
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 5.15 deep-scrub starts
Jan 22 13:37:33 compute-2 ceph-mon[77081]: pgmap v199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 268 B/s, 0 objects/s recovering
Jan 22 13:37:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 5.15 deep-scrub ok
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 5.11 scrub starts
Jan 22 13:37:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:33 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 41 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 65 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:34.079+0000 7f47f8ed4640 -1 osd.2 65 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:34.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:34 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:35 compute-2 ceph-osd[79779]: osd.2 66 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:35.039+0000 7f47f8ed4640 -1 osd.2 66 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:35 compute-2 ceph-mon[77081]: 5.11 scrub ok
Jan 22 13:37:35 compute-2 ceph-mon[77081]: pgmap v200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 252 B/s, 0 objects/s recovering
Jan 22 13:37:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 13:37:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 13:37:35 compute-2 ceph-mon[77081]: osdmap e65: 3 total, 3 up, 3 in
Jan 22 13:37:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:35 compute-2 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 13:37:35 compute-2 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 13:37:35 compute-2 ceph-mon[77081]: Deploying daemon keepalived.rgw.default.compute-0.hawera on compute-0
Jan 22 13:37:35 compute-2 ceph-mon[77081]: 7.2 scrub starts
Jan 22 13:37:35 compute-2 ceph-mon[77081]: 7.2 scrub ok
Jan 22 13:37:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 22 13:37:36 compute-2 ceph-osd[79779]: osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:35.999+0000 7f47f8ed4640 -1 osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:36.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 13:37:36 compute-2 ceph-mon[77081]: osdmap e66: 3 total, 3 up, 3 in
Jan 22 13:37:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:36 compute-2 ceph-mon[77081]: pgmap v203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 174 B/s, 0 objects/s recovering
Jan 22 13:37:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 13:37:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 13:37:36 compute-2 ceph-mon[77081]: osdmap e67: 3 total, 3 up, 3 in
Jan 22 13:37:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:36 compute-2 sudo[82425]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:36 compute-2 ceph-osd[79779]: osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:36.994+0000 7f47f8ed4640 -1 osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:37 compute-2 ceph-mon[77081]: 5.16 scrub starts
Jan 22 13:37:37 compute-2 ceph-mon[77081]: 5.16 scrub ok
Jan 22 13:37:37 compute-2 ceph-mon[77081]: 7.9 scrub starts
Jan 22 13:37:37 compute-2 ceph-mon[77081]: 7.9 scrub ok
Jan 22 13:37:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:37 compute-2 ceph-mon[77081]: osdmap e68: 3 total, 3 up, 3 in
Jan 22 13:37:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 13:37:37 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 47 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:37 compute-2 ceph-osd[79779]: osd.2 68 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:37.994+0000 7f47f8ed4640 -1 osd.2 68 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 22 13:37:38 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:38 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:38 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:38 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:38.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:38.967+0000 7f47f8ed4640 -1 osd.2 69 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:38 compute-2 ceph-osd[79779]: osd.2 69 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:39 compute-2 sshd-session[81928]: Connection closed by 192.168.122.30 port 51762
Jan 22 13:37:39 compute-2 sshd-session[81915]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:37:39 compute-2 systemd[1]: session-33.scope: Deactivated successfully.
Jan 22 13:37:39 compute-2 systemd[1]: session-33.scope: Consumed 8.983s CPU time.
Jan 22 13:37:39 compute-2 systemd-logind[787]: Session 33 logged out. Waiting for processes to exit.
Jan 22 13:37:39 compute-2 systemd-logind[787]: Removed session 33.
Jan 22 13:37:39 compute-2 ceph-mon[77081]: pgmap v206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:39 compute-2 ceph-mon[77081]: 7.e scrub starts
Jan 22 13:37:39 compute-2 ceph-mon[77081]: 7.e scrub ok
Jan 22 13:37:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:39 compute-2 ceph-mon[77081]: 5.1f scrub starts
Jan 22 13:37:39 compute-2 ceph-mon[77081]: 5.1f scrub ok
Jan 22 13:37:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 13:37:39 compute-2 ceph-mon[77081]: osdmap e69: 3 total, 3 up, 3 in
Jan 22 13:37:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'704 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'704 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=61'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=0/0 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=61'686 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'698 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'698 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=0/0 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'686 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:39 compute-2 sudo[82598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:39 compute-2 sudo[82598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:39 compute-2 sudo[82598]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:39 compute-2 sudo[82623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:37:39 compute-2 sudo[82623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:39 compute-2 sudo[82623]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:39 compute-2 sudo[82648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:39 compute-2 sudo[82648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:39 compute-2 sudo[82648]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:39 compute-2 sudo[82673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:37:39 compute-2 sudo[82673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:39.994+0000 7f47f8ed4640 -1 osd.2 70 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:39 compute-2 ceph-osd[79779]: osd.2 70 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 22 13:37:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 22 13:37:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:40 compute-2 ceph-mon[77081]: 6.a scrub starts
Jan 22 13:37:40 compute-2 ceph-mon[77081]: 6.a scrub ok
Jan 22 13:37:40 compute-2 ceph-mon[77081]: osdmap e70: 3 total, 3 up, 3 in
Jan 22 13:37:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:40 compute-2 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 13:37:40 compute-2 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 13:37:40 compute-2 ceph-mon[77081]: Deploying daemon keepalived.rgw.default.compute-2.xbsrtt on compute-2
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'704 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=70/71 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'698 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'686 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:40.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.e deep-scrub starts
Jan 22 13:37:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:41.019+0000 7f47f8ed4640 -1 osd.2 71 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 71 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.e deep-scrub ok
Jan 22 13:37:41 compute-2 ceph-mon[77081]: pgmap v209: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:41 compute-2 ceph-mon[77081]: 4.9 scrub starts
Jan 22 13:37:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:41 compute-2 ceph-mon[77081]: 4.9 scrub ok
Jan 22 13:37:41 compute-2 ceph-mon[77081]: osdmap e71: 3 total, 3 up, 3 in
Jan 22 13:37:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:41 compute-2 ceph-mon[77081]: 5.e deep-scrub starts
Jan 22 13:37:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:41 compute-2 ceph-mon[77081]: 5.e deep-scrub ok
Jan 22 13:37:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'705 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'705 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:41 compute-2 ceph-osd[79779]: osd.2 72 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:41.975+0000 7f47f8ed4640 -1 osd.2 72 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:42.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 22 13:37:42 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'705 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:42 compute-2 ceph-mon[77081]: pgmap v211: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:42 compute-2 ceph-mon[77081]: osdmap e72: 3 total, 3 up, 3 in
Jan 22 13:37:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:42 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:42 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=72/73 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:42 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:37:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:42.940+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:42 compute-2 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:43 compute-2 podman[82737]: 2026-01-22 13:37:43.262776428 +0000 UTC m=+3.369691115 container create 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, distribution-scope=public, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 22 13:37:43 compute-2 systemd[1]: Started libpod-conmon-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope.
Jan 22 13:37:43 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:37:43 compute-2 podman[82737]: 2026-01-22 13:37:43.24804461 +0000 UTC m=+3.354959327 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 13:37:43 compute-2 podman[82737]: 2026-01-22 13:37:43.328922334 +0000 UTC m=+3.435837051 container init 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 13:37:43 compute-2 podman[82737]: 2026-01-22 13:37:43.337732573 +0000 UTC m=+3.444647260 container start 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, release=1793, description=keepalived for Ceph, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Jan 22 13:37:43 compute-2 podman[82737]: 2026-01-22 13:37:43.341809713 +0000 UTC m=+3.448724430 container attach 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, release=1793, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 13:37:43 compute-2 ecstatic_blackburn[82833]: 0 0
Jan 22 13:37:43 compute-2 systemd[1]: libpod-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope: Deactivated successfully.
Jan 22 13:37:43 compute-2 conmon[82833]: conmon 20c76435c0199d02c263 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope/container/memory.events
Jan 22 13:37:43 compute-2 podman[82737]: 2026-01-22 13:37:43.345712558 +0000 UTC m=+3.452627245 container died 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, distribution-scope=public, vcs-type=git, release=1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc.)
Jan 22 13:37:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-07f1cae9fbe0e12c5bc10793688baa22e59996ef0636b0428cf808f6c4a4d983-merged.mount: Deactivated successfully.
Jan 22 13:37:43 compute-2 podman[82737]: 2026-01-22 13:37:43.381490535 +0000 UTC m=+3.488405222 container remove 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, architecture=x86_64, name=keepalived)
Jan 22 13:37:43 compute-2 systemd[1]: libpod-conmon-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope: Deactivated successfully.
Jan 22 13:37:43 compute-2 systemd[1]: Reloading.
Jan 22 13:37:43 compute-2 systemd-sysv-generator[82882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:37:43 compute-2 systemd-rc-local-generator[82879]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:37:43 compute-2 systemd[1]: Reloading.
Jan 22 13:37:43 compute-2 systemd-rc-local-generator[82921]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:37:43 compute-2 systemd-sysv-generator[82926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:37:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:43.919+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:43 compute-2 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:43 compute-2 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.xbsrtt for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 13:37:44 compute-2 podman[82977]: 2026-01-22 13:37:44.193479316 +0000 UTC m=+0.039285602 container create 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2)
Jan 22 13:37:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a358ca8d9286b2c87ed8309fad35a1ad1ec5603e0132fed2f4d7473a5334162f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:37:44 compute-2 podman[82977]: 2026-01-22 13:37:44.249709696 +0000 UTC m=+0.095516002 container init 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-type=git, release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., version=2.2.4)
Jan 22 13:37:44 compute-2 podman[82977]: 2026-01-22 13:37:44.254955858 +0000 UTC m=+0.100762144 container start 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, name=keepalived, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20)
Jan 22 13:37:44 compute-2 bash[82977]: 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4
Jan 22 13:37:44 compute-2 podman[82977]: 2026-01-22 13:37:44.175198702 +0000 UTC m=+0.021005008 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 13:37:44 compute-2 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.xbsrtt for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Starting VRRP child process, pid=4
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Startup complete
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: (VI_0) Entering BACKUP STATE (init)
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: VRRP_Script(check_backend) succeeded
Jan 22 13:37:44 compute-2 sudo[82673]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:44.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:44.889+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:44 compute-2 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:45.927+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:45 compute-2 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:46 compute-2 ceph-mon[77081]: 7.3 deep-scrub starts
Jan 22 13:37:46 compute-2 ceph-mon[77081]: 7.3 deep-scrub ok
Jan 22 13:37:46 compute-2 ceph-mon[77081]: osdmap e73: 3 total, 3 up, 3 in
Jan 22 13:37:46 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 52 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 22 13:37:46 compute-2 sudo[83001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-2 sudo[83001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 sudo[83001]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:46.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:46 compute-2 sudo[83026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-2 sudo[83026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 sudo[83026]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-2 sudo[83051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-2 sudo[83051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 sudo[83051]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-2 sudo[83076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:37:46 compute-2 sudo[83076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 sudo[83076]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-2 sudo[83101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-2 sudo[83101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 sudo[83101]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-2 sudo[83126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:37:46 compute-2 sudo[83126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 sudo[83126]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:46.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:46 compute-2 sudo[83151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:46 compute-2 sudo[83151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 sudo[83151]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:46 compute-2 sudo[83176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:37:46 compute-2 sudo[83176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:46.967+0000 7f47f8ed4640 -1 osd.2 74 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:46 compute-2 ceph-osd[79779]: osd.2 74 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.6 scrub starts
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.6 scrub ok
Jan 22 13:37:47 compute-2 ceph-mon[77081]: pgmap v214: 305 pgs: 4 unknown, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.18 scrub starts
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.18 scrub ok
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.4 deep-scrub starts
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:47 compute-2 ceph-mon[77081]: pgmap v215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 682 B/s wr, 52 op/s; 300 B/s, 10 objects/s recovering
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.f scrub starts
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.4 deep-scrub ok
Jan 22 13:37:47 compute-2 ceph-mon[77081]: 7.f scrub ok
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-2 ceph-mon[77081]: osdmap e74: 3 total, 3 up, 3 in
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]: dispatch
Jan 22 13:37:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]: dispatch
Jan 22 13:37:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 22 13:37:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 3314933000854323200, adjusting msgr requires
Jan 22 13:37:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 13:37:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 13:37:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 13:37:47 compute-2 ceph-osd[79779]: osd.2 75 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 22 13:37:47 compute-2 ceph-osd[79779]: osd.2 75 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 22 13:37:47 compute-2 ceph-osd[79779]: osd.2 75 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 22 13:37:47 compute-2 podman[83271]: 2026-01-22 13:37:47.477805243 +0000 UTC m=+0.104746662 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 13:37:47 compute-2 podman[83271]: 2026-01-22 13:37:47.785547568 +0000 UTC m=+0.412489017 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:37:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 22 13:37:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:47 2026: (VI_0) Entering MASTER STATE
Jan 22 13:37:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:47 2026: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Jan 22 13:37:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:47 2026: (VI_0) Entering BACKUP STATE
Jan 22 13:37:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:47.962+0000 7f47f8ed4640 -1 osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:47 compute-2 ceph-osd[79779]: osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:48.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:48 compute-2 ceph-mon[77081]: 8.1 scrub starts
Jan 22 13:37:48 compute-2 ceph-mon[77081]: 8.1 scrub ok
Jan 22 13:37:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]': finished
Jan 22 13:37:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]': finished
Jan 22 13:37:48 compute-2 ceph-mon[77081]: osdmap e75: 3 total, 3 up, 3 in
Jan 22 13:37:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 13:37:48 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 57 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 13:37:48 compute-2 ceph-mon[77081]: osdmap e76: 3 total, 3 up, 3 in
Jan 22 13:37:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:48 compute-2 podman[83427]: 2026-01-22 13:37:48.790415042 +0000 UTC m=+0.321447448 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:37:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:37:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:48.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:37:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:48.991+0000 7f47f8ed4640 -1 osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:48 compute-2 ceph-osd[79779]: osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:49 compute-2 podman[83427]: 2026-01-22 13:37:49.357799313 +0000 UTC m=+0.888831699 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:37:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:49 compute-2 podman[83493]: 2026-01-22 13:37:49.759643582 +0000 UTC m=+0.160746615 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, release=1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 13:37:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:50.019+0000 7f47f8ed4640 -1 osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:50 compute-2 ceph-osd[79779]: osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 22 13:37:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 22 13:37:50 compute-2 podman[83493]: 2026-01-22 13:37:50.061987611 +0000 UTC m=+0.463090574 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, architecture=x86_64)
Jan 22 13:37:50 compute-2 sudo[83176]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:50.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 22 13:37:50 compute-2 ceph-mon[77081]: pgmap v218: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 685 B/s wr, 53 op/s; 301 B/s, 10 objects/s recovering
Jan 22 13:37:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:50 compute-2 ceph-mon[77081]: 8.7 scrub starts
Jan 22 13:37:50 compute-2 ceph-mon[77081]: 8.7 scrub ok
Jan 22 13:37:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:50 compute-2 sudo[83528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:50 compute-2 sudo[83528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:50 compute-2 sudo[83528]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:50.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:50 compute-2 sudo[83553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:37:50 compute-2 sudo[83553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:50 compute-2 sudo[83553]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:50 compute-2 sudo[83578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:50 compute-2 sudo[83578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:50 compute-2 sudo[83578]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:50 compute-2 sudo[83603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:37:50 compute-2 sudo[83603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:51.000+0000 7f47f8ed4640 -1 osd.2 77 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:51 compute-2 ceph-osd[79779]: osd.2 77 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 22 13:37:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 22 13:37:51 compute-2 sudo[83603]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:51 compute-2 sudo[83659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:51 compute-2 sudo[83659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:51 compute-2 sudo[83659]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 22 13:37:51 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:51 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:51 compute-2 sudo[83684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:37:51 compute-2 sudo[83684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:51 compute-2 sudo[83684]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:51 compute-2 sudo[83709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:37:51 compute-2 sudo[83709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:51 compute-2 sudo[83709]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:51 compute-2 sudo[83734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 13:37:51 compute-2 sudo[83734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:37:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:52.002+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:52 compute-2 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 22 13:37:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 22 13:37:52 compute-2 podman[83799]: 2026-01-22 13:37:51.945529967 +0000 UTC m=+0.023905707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:37:52 compute-2 ceph-mon[77081]: pgmap v220: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 682 B/s wr, 52 op/s; 300 B/s, 10 objects/s recovering
Jan 22 13:37:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 13:37:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:52 compute-2 ceph-mon[77081]: 7.1f scrub starts
Jan 22 13:37:52 compute-2 ceph-mon[77081]: 7.1f scrub ok
Jan 22 13:37:52 compute-2 ceph-mon[77081]: osdmap e77: 3 total, 3 up, 3 in
Jan 22 13:37:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:52 compute-2 ceph-mon[77081]: 4.15 scrub starts
Jan 22 13:37:52 compute-2 ceph-mon[77081]: 4.15 scrub ok
Jan 22 13:37:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:37:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:52.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 22 13:37:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:52.970+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:52 compute-2 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:54.006+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:54 compute-2 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 22 13:37:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:54.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:54.997+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:54 compute-2 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:55 compute-2 sshd-session[83816]: Accepted publickey for zuul from 192.168.122.30 port 52944 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:37:55 compute-2 systemd-logind[787]: New session 34 of user zuul.
Jan 22 13:37:55 compute-2 systemd[1]: Started Session 34 of User zuul.
Jan 22 13:37:55 compute-2 sshd-session[83816]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:37:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 22 13:37:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 22 13:37:55 compute-2 podman[83799]: 2026-01-22 13:37:55.807864113 +0000 UTC m=+3.886239823 container create bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 13:37:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:37:55 compute-2 python3.9[83969]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 13:37:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 22 13:37:56 compute-2 systemd[1]: Started libpod-conmon-bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c.scope.
Jan 22 13:37:56 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:37:56 compute-2 podman[83799]: 2026-01-22 13:37:56.261297386 +0000 UTC m=+4.339673106 container init bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:37:56 compute-2 podman[83799]: 2026-01-22 13:37:56.270235497 +0000 UTC m=+4.348611237 container start bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:37:56 compute-2 brave_allen[83980]: 167 167
Jan 22 13:37:56 compute-2 systemd[1]: libpod-bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c.scope: Deactivated successfully.
Jan 22 13:37:56 compute-2 ceph-mon[77081]: 5.1c scrub starts
Jan 22 13:37:56 compute-2 ceph-mon[77081]: 5.1c scrub ok
Jan 22 13:37:56 compute-2 ceph-mon[77081]: 8.e deep-scrub starts
Jan 22 13:37:56 compute-2 ceph-mon[77081]: 8.e deep-scrub ok
Jan 22 13:37:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 13:37:56 compute-2 ceph-mon[77081]: pgmap v222: 305 pgs: 2 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 20 B/s, 2 objects/s recovering
Jan 22 13:37:56 compute-2 ceph-mon[77081]: osdmap e78: 3 total, 3 up, 3 in
Jan 22 13:37:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 13:37:56 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 62 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:37:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:56.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:56 compute-2 podman[83799]: 2026-01-22 13:37:56.598129068 +0000 UTC m=+4.676504778 container attach bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:37:56 compute-2 podman[83799]: 2026-01-22 13:37:56.598928419 +0000 UTC m=+4.677304129 container died bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:37:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:56.795+0000 7f47f8ed4640 -1 osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:56 compute-2 ceph-osd[79779]: osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:37:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:37:57 compute-2 python3.9[84161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:37:57 compute-2 systemd[1]: var-lib-containers-storage-overlay-f69366e925332ba90e232e1f47aae5c36a924131bfc9a785f975f41f6d41b78e-merged.mount: Deactivated successfully.
Jan 22 13:37:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 22 13:37:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:57.749+0000 7f47f8ed4640 -1 osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:57 compute-2 ceph-osd[79779]: osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:57 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:57 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:37:57 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:57 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:57 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:37:57 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:37:58 compute-2 sudo[84316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpjzpbauuovcfrbnliekwtjtfublsidt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089077.946947-96-116852656727264/AnsiballZ_command.py'
Jan 22 13:37:58 compute-2 sudo[84316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:37:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:58 compute-2 podman[83799]: 2026-01-22 13:37:58.42530357 +0000 UTC m=+6.503679280 container remove bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 4.1f scrub starts
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 4.1f scrub ok
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 4.8 scrub starts
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 5.18 scrub starts
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 5.18 scrub ok
Jan 22 13:37:58 compute-2 ceph-mon[77081]: pgmap v224: 305 pgs: 2 active+remapped, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 13:37:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 5.4 scrub starts
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-2 ceph-mon[77081]: pgmap v225: 305 pgs: 2 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 43 B/s, 4 objects/s recovering
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 4.8 scrub ok
Jan 22 13:37:58 compute-2 ceph-mon[77081]: 5.4 scrub ok
Jan 22 13:37:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 13:37:58 compute-2 ceph-mon[77081]: osdmap e79: 3 total, 3 up, 3 in
Jan 22 13:37:58 compute-2 systemd[1]: libpod-conmon-bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c.scope: Deactivated successfully.
Jan 22 13:37:58 compute-2 python3.9[84318]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:37:58 compute-2 sudo[84316]: pam_unix(sudo:session): session closed for user root
Jan 22 13:37:58 compute-2 podman[84326]: 2026-01-22 13:37:58.564856671 +0000 UTC m=+0.027373651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:37:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:58.744+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:58 compute-2 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 22 13:37:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:37:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:37:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:37:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 22 13:37:59 compute-2 sshd-session[84341]: Invalid user solana from 92.118.39.95 port 51910
Jan 22 13:37:59 compute-2 podman[84326]: 2026-01-22 13:37:59.160785585 +0000 UTC m=+0.623302575 container create 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 13:37:59 compute-2 sshd-session[84341]: Connection closed by invalid user solana 92.118.39.95 port 51910 [preauth]
Jan 22 13:37:59 compute-2 sudo[84492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abdodjqcombzgahxoxmcaukdzgjkrbcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089079.134736-131-210788620526780/AnsiballZ_stat.py'
Jan 22 13:37:59 compute-2 sudo[84492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:37:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:59.706+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:59 compute-2 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:37:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:37:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 22 13:37:59 compute-2 python3.9[84494]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:38:00 compute-2 sudo[84492]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:00.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:00 compute-2 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:00.706+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 22 13:38:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 22 13:38:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 22 13:38:00 compute-2 sudo[84647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmgvofjniblphztpjkiluxjbrxedmpli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089080.4079998-165-271833765118154/AnsiballZ_file.py'
Jan 22 13:38:00 compute-2 sudo[84647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:01 compute-2 python3.9[84649]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:38:01 compute-2 sudo[84647]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:01 compute-2 systemd[1]: Started libpod-conmon-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope.
Jan 22 13:38:01 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:38:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 13:38:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 13:38:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 13:38:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 13:38:01 compute-2 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:01.731+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:01 compute-2 sudo[84804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqukzjupjyzbkmcsgltjmafbicptymcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089081.3720994-192-167138984428037/AnsiballZ_file.py'
Jan 22 13:38:01 compute-2 sudo[84804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:02 compute-2 python3.9[84806]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:38:02 compute-2 sudo[84804]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 22 13:38:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:02.763+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:02 compute-2 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 22 13:38:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:02.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 22 13:38:03 compute-2 ceph-mds[81154]: mds.beacon.cephfs.compute-2.zycvef missed beacon ack from the monitors
Jan 22 13:38:03 compute-2 python3.9[84957]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:38:03 compute-2 podman[84326]: 2026-01-22 13:38:03.47274235 +0000 UTC m=+4.935259310 container init 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:38:03 compute-2 podman[84326]: 2026-01-22 13:38:03.483433809 +0000 UTC m=+4.945950759 container start 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 13:38:03 compute-2 network[84976]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:38:03 compute-2 network[84977]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:38:03 compute-2 network[84978]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:38:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:03 compute-2 podman[84326]: 2026-01-22 13:38:03.780039164 +0000 UTC m=+5.242556124 container attach 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 13:38:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:03.780+0000 7f47f8ed4640 -1 osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:03 compute-2 ceph-osd[79779]: osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:03 compute-2 ceph-mon[77081]: pgmap v227: 305 pgs: 2 unknown, 2 active+clean+scrubbing, 2 peering, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 4 objects/s recovering
Jan 22 13:38:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 13:38:03 compute-2 ceph-mon[77081]: osdmap e80: 3 total, 3 up, 3 in
Jan 22 13:38:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:04.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 22 13:38:04 compute-2 charming_albattani[84775]: [
Jan 22 13:38:04 compute-2 charming_albattani[84775]:     {
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "available": false,
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "ceph_device": false,
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "lsm_data": {},
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "lvs": [],
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "path": "/dev/sr0",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "rejected_reasons": [
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "Has a FileSystem",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "Insufficient space (<5GB)"
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         ],
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         "sys_api": {
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "actuators": null,
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "device_nodes": "sr0",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "devname": "sr0",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "human_readable_size": "482.00 KB",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "id_bus": "ata",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "model": "QEMU DVD-ROM",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "nr_requests": "2",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "parent": "/dev/sr0",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "partitions": {},
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "path": "/dev/sr0",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "removable": "1",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "rev": "2.5+",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "ro": "0",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "rotational": "1",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "sas_address": "",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "sas_device_handle": "",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "scheduler_mode": "mq-deadline",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "sectors": 0,
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "sectorsize": "2048",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "size": 493568.0,
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "support_discard": "2048",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "type": "disk",
Jan 22 13:38:04 compute-2 charming_albattani[84775]:             "vendor": "QEMU"
Jan 22 13:38:04 compute-2 charming_albattani[84775]:         }
Jan 22 13:38:04 compute-2 charming_albattani[84775]:     }
Jan 22 13:38:04 compute-2 charming_albattani[84775]: ]
Jan 22 13:38:04 compute-2 podman[84326]: 2026-01-22 13:38:04.782777678 +0000 UTC m=+6.245294628 container died 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 13:38:04 compute-2 systemd[1]: libpod-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope: Deactivated successfully.
Jan 22 13:38:04 compute-2 systemd[1]: libpod-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope: Consumed 1.299s CPU time.
Jan 22 13:38:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:04.799+0000 7f47f8ed4640 -1 osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:04 compute-2 ceph-osd[79779]: osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:04.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:05.819+0000 7f47f8ed4640 -1 osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:05 compute-2 ceph-osd[79779]: osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:06 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:06 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:06 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 7.16 scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 7.16 scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 67 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:06 compute-2 ceph-mon[77081]: pgmap v229: 305 pgs: 2 unknown, 2 active+clean+scrubbing, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 2 objects/s recovering
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 5.1a scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 4.13 scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 4.13 scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: osdmap e81: 3 total, 3 up, 3 in
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 10.1e scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 5.1a scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 10.1e scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: pgmap v231: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 8.13 scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 8.13 scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 6.8 scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 6.8 scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 8.2 scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 8.1a deep-scrub starts
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 8.1a deep-scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: 8.2 scrub ok
Jan 22 13:38:06 compute-2 ceph-mon[77081]: pgmap v232: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 13:38:06 compute-2 ceph-mon[77081]: osdmap e82: 3 total, 3 up, 3 in
Jan 22 13:38:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:06.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:06 compute-2 sudo[86185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:06 compute-2 sudo[86185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:06 compute-2 sudo[86185]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:06 compute-2 sudo[86214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:06 compute-2 sudo[86214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:06 compute-2 sudo[86214]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:06.797+0000 7f47f8ed4640 -1 osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:06 compute-2 ceph-osd[79779]: osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:06 compute-2 systemd[1]: var-lib-containers-storage-overlay-4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332-merged.mount: Deactivated successfully.
Jan 22 13:38:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:06.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 22 13:38:07 compute-2 ceph-osd[79779]: osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:07.777+0000 7f47f8ed4640 -1 osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:08.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:08 compute-2 ceph-mon[77081]: 4.18 scrub starts
Jan 22 13:38:08 compute-2 ceph-mon[77081]: 4.18 scrub ok
Jan 22 13:38:08 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 74 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:08 compute-2 ceph-mon[77081]: 5.10 deep-scrub starts
Jan 22 13:38:08 compute-2 ceph-mon[77081]: 5.10 deep-scrub ok
Jan 22 13:38:08 compute-2 ceph-mon[77081]: pgmap v234: 305 pgs: 2 active+clean+scrubbing, 2 activating+remapped, 2 unknown, 2 active+clean+laggy, 297 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 11/206 objects misplaced (5.340%)
Jan 22 13:38:08 compute-2 podman[84326]: 2026-01-22 13:38:08.54996829 +0000 UTC m=+10.012485240 container remove 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:38:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:08 compute-2 systemd[1]: libpod-conmon-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope: Deactivated successfully.
Jan 22 13:38:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 22 13:38:08 compute-2 sudo[83734]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:08 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 84 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:08 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 84 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:08.788+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:08 compute-2 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:08.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:09 compute-2 sudo[86352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-2 sudo[86352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86352]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 22 13:38:09 compute-2 sudo[86382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86382]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-2 sudo[86425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86425]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph
Jan 22 13:38:09 compute-2 sudo[86476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86476]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-2 sudo[86528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86528]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:09 compute-2 sudo[86553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86553]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:09.782+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:09 compute-2 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:09 compute-2 python3.9[86525]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:38:09 compute-2 sudo[86578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-2 sudo[86578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86578]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:09 compute-2 sudo[86603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86603]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:09 compute-2 sudo[86648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86648]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:09 compute-2 sudo[86677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:09 compute-2 sudo[86677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:09 compute-2 sudo[86677]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sshd-session[86472]: Invalid user sol from 45.148.10.240 port 54414
Jan 22 13:38:10 compute-2 sudo[86725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-2 sudo[86725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[86725]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sudo[86771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:10 compute-2 sudo[86771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[86771]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sshd-session[86472]: Connection closed by invalid user sol 45.148.10.240 port 54414 [preauth]
Jan 22 13:38:10 compute-2 sudo[86826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-2 sudo[86826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[86826]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sudo[86875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new
Jan 22 13:38:10 compute-2 sudo[86875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[86875]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sudo[86913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-2 sudo[86913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[86913]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:10.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:10 compute-2 sudo[86966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 22 13:38:10 compute-2 sudo[86966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[86966]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sudo[87002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-2 sudo[87002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[87002]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:10 compute-2 ceph-mon[77081]: osdmap e83: 3 total, 3 up, 3 in
Jan 22 13:38:10 compute-2 ceph-mon[77081]: pgmap v236: 305 pgs: 1 active+recovering+remapped, 2 unknown, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 9/205 objects misplaced (4.390%); 0 B/s, 0 objects/s recovering
Jan 22 13:38:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:10 compute-2 ceph-mon[77081]: osdmap e84: 3 total, 3 up, 3 in
Jan 22 13:38:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:10 compute-2 sudo[87027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:38:10 compute-2 sudo[87027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[87027]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sudo[87052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-2 sudo[87052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[87052]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 sudo[87077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config
Jan 22 13:38:10 compute-2 sudo[87077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[87077]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 python3.9[86984]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:38:10 compute-2 sudo[87103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-2 sudo[87103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[87103]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:10.829+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:10 compute-2 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:10 compute-2 sudo[87131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:10 compute-2 sudo[87131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[87131]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:10.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:10 compute-2 sudo[87156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:10 compute-2 sudo[87156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:10 compute-2 sudo[87156]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:11 compute-2 sudo[87181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87181]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-2 sudo[87214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87214]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:11 compute-2 sudo[87255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87255]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-2 sudo[87303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87303]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:11 compute-2 sudo[87328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87328]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-2 sudo[87353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87353]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new
Jan 22 13:38:11 compute-2 sudo[87378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87378]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:11 compute-2 sudo[87403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87403]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 sudo[87428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-088fe176-0106-5401-803c-2da38b73b76a/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf.new /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:11 compute-2 sudo[87428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:11 compute-2 sudo[87428]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:11.796+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:11 compute-2 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:12 compute-2 python3.9[87578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:38:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 22 13:38:12 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 85 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=0/0 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 luod=0'0 crt=61'693 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:12 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 85 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=0/0 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 crt=61'693 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:12.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:12.823+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:12 compute-2 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:13 compute-2 sudo[87735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqvkbfbzdcbdrdvgarckzjciqvjhbpwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089092.7229536-336-117577628120035/AnsiballZ_setup.py'
Jan 22 13:38:13 compute-2 sudo[87735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:13 compute-2 python3.9[87737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:38:13 compute-2 sudo[87735]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:13 compute-2 ceph-mon[77081]: 10.6 deep-scrub starts
Jan 22 13:38:13 compute-2 ceph-mon[77081]: 10.6 deep-scrub ok
Jan 22 13:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 13:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:38:13 compute-2 ceph-mon[77081]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 13:38:13 compute-2 ceph-mon[77081]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 13:38:13 compute-2 ceph-mon[77081]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 13:38:13 compute-2 ceph-mon[77081]: pgmap v238: 305 pgs: 1 active+recovering+remapped, 2 unknown, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 9/205 objects misplaced (4.390%); 0 B/s, 0 objects/s recovering
Jan 22 13:38:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 85 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=84/85 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:13.777+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:13 compute-2 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:13 compute-2 sudo[87819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsamldhszvebeunjobcnmpmsqfgdrkcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089092.7229536-336-117577628120035/AnsiballZ_dnf.py'
Jan 22 13:38:13 compute-2 sudo[87819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:38:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:14.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:14.750+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:14 compute-2 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:14.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:15 compute-2 python3.9[87821]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:38:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:15.743+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:15 compute-2 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 22 13:38:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-2 ceph-mon[77081]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:16 compute-2 ceph-mon[77081]: Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:16 compute-2 ceph-mon[77081]: Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 13:38:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-2 ceph-mon[77081]: pgmap v239: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 142 B/s wr, 20 op/s; 9/215 objects misplaced (4.186%); 30 B/s, 1 objects/s recovering
Jan 22 13:38:16 compute-2 ceph-mon[77081]: osdmap e85: 3 total, 3 up, 3 in
Jan 22 13:38:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-2 ceph-mon[77081]: 10.7 deep-scrub starts
Jan 22 13:38:16 compute-2 ceph-mon[77081]: 10.7 deep-scrub ok
Jan 22 13:38:16 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 79 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:16.700+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:16 compute-2 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:16 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 86 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=0/0 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=62'705 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:16 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 86 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=0/0 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=62'705 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:17 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 86 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=85/86 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 crt=61'693 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 22 13:38:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:17.730+0000 7f47f8ed4640 -1 osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:17 compute-2 ceph-osd[79779]: osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:18.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 22 13:38:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 22 13:38:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-2 ceph-mon[77081]: pgmap v241: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 307 B/s wr, 22 op/s; 9/215 objects misplaced (4.186%); 33 B/s, 1 objects/s recovering
Jan 22 13:38:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-2 ceph-mon[77081]: pgmap v242: 305 pgs: 1 active+recovering+remapped, 1 peering, 1 active+remapped, 1 active+recovery_wait+remapped, 2 active+clean+laggy, 299 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 18 op/s; 9/213 objects misplaced (4.225%); 27 B/s, 0 objects/s recovering
Jan 22 13:38:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-2 ceph-mon[77081]: 10.9 scrub starts
Jan 22 13:38:18 compute-2 ceph-mon[77081]: 10.9 scrub ok
Jan 22 13:38:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:18 compute-2 ceph-mon[77081]: osdmap e86: 3 total, 3 up, 3 in
Jan 22 13:38:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:18.762+0000 7f47f8ed4640 -1 osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:18 compute-2 ceph-osd[79779]: osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:18 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 87 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:18 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 87 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:19 compute-2 ceph-osd[79779]: osd.2 87 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:19.794+0000 7f47f8ed4640 -1 osd.2 87 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 22 13:38:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 22 13:38:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:20.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 8.1d scrub starts
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 8.1d scrub ok
Jan 22 13:38:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:20 compute-2 ceph-mon[77081]: pgmap v244: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 2 active+clean+laggy, 300 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 18 op/s; 3/213 objects misplaced (1.408%); 27 B/s, 1 objects/s recovering
Jan 22 13:38:20 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 10.4 scrub starts
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 10.a scrub starts
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 10.a scrub ok
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 8.1e scrub starts
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 8.1e scrub ok
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 10.4 scrub ok
Jan 22 13:38:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:20 compute-2 ceph-mon[77081]: osdmap e87: 3 total, 3 up, 3 in
Jan 22 13:38:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:38:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:38:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 22 13:38:20 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 88 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=87/88 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:20 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 88 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=62'705 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:20.831+0000 7f47f8ed4640 -1 osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:20 compute-2 ceph-osd[79779]: osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:20.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:21 compute-2 ceph-mon[77081]: 10.b scrub starts
Jan 22 13:38:21 compute-2 ceph-mon[77081]: 10.b scrub ok
Jan 22 13:38:21 compute-2 ceph-mon[77081]: pgmap v246: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 1 peering, 2 active+clean+laggy, 300 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 3/214 objects misplaced (1.402%); 0 B/s, 0 objects/s recovering
Jan 22 13:38:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:21 compute-2 ceph-mon[77081]: 8.1c scrub starts
Jan 22 13:38:21 compute-2 ceph-mon[77081]: 8.1c scrub ok
Jan 22 13:38:21 compute-2 ceph-mon[77081]: osdmap e88: 3 total, 3 up, 3 in
Jan 22 13:38:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:21.875+0000 7f47f8ed4640 -1 osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:21 compute-2 ceph-osd[79779]: osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 22 13:38:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:22.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:22 compute-2 ceph-mon[77081]: 10.c deep-scrub starts
Jan 22 13:38:22 compute-2 ceph-mon[77081]: 10.c deep-scrub ok
Jan 22 13:38:22 compute-2 ceph-mon[77081]: 9.2 scrub starts
Jan 22 13:38:22 compute-2 ceph-mon[77081]: 9.2 scrub ok
Jan 22 13:38:22 compute-2 ceph-mon[77081]: pgmap v248: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 22 13:38:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 13:38:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 13:38:22 compute-2 ceph-mon[77081]: osdmap e89: 3 total, 3 up, 3 in
Jan 22 13:38:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 22 13:38:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:22.897+0000 7f47f8ed4640 -1 osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:22 compute-2 ceph-osd[79779]: osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:22.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:23.945+0000 7f47f8ed4640 -1 osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:23 compute-2 ceph-osd[79779]: osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:24 compute-2 ceph-mon[77081]: 9.4 deep-scrub starts
Jan 22 13:38:24 compute-2 ceph-mon[77081]: 10.d scrub starts
Jan 22 13:38:24 compute-2 ceph-mon[77081]: 10.d scrub ok
Jan 22 13:38:24 compute-2 ceph-mon[77081]: 9.4 deep-scrub ok
Jan 22 13:38:24 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:24 compute-2 ceph-mon[77081]: osdmap e90: 3 total, 3 up, 3 in
Jan 22 13:38:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 13:38:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 22 13:38:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:38:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:38:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:24.981+0000 7f47f8ed4640 -1 osd.2 91 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:24 compute-2 ceph-osd[79779]: osd.2 91 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:25 compute-2 ceph-mon[77081]: 10.e scrub starts
Jan 22 13:38:25 compute-2 ceph-mon[77081]: 10.e scrub ok
Jan 22 13:38:25 compute-2 ceph-mon[77081]: pgmap v251: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 68 B/s, 2 objects/s recovering
Jan 22 13:38:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 13:38:25 compute-2 ceph-mon[77081]: osdmap e91: 3 total, 3 up, 3 in
Jan 22 13:38:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 22 13:38:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:25.993+0000 7f47f8ed4640 -1 osd.2 92 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:25 compute-2 ceph-osd[79779]: osd.2 92 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.16 deep-scrub starts
Jan 22 13:38:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.16 deep-scrub ok
Jan 22 13:38:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 22 13:38:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:26 compute-2 ceph-mon[77081]: 10.16 scrub starts
Jan 22 13:38:26 compute-2 ceph-mon[77081]: 10.16 scrub ok
Jan 22 13:38:26 compute-2 ceph-mon[77081]: osdmap e92: 3 total, 3 up, 3 in
Jan 22 13:38:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 13:38:26 compute-2 ceph-mon[77081]: 8.16 deep-scrub starts
Jan 22 13:38:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:26 compute-2 ceph-mon[77081]: 8.16 deep-scrub ok
Jan 22 13:38:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:26 compute-2 sudo[87897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:26 compute-2 sudo[87897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:26 compute-2 sudo[87897]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:26 compute-2 sudo[87922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:38:26 compute-2 sudo[87922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:26 compute-2 sudo[87922]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:26 compute-2 sudo[87947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:26 compute-2 sudo[87947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:26 compute-2 sudo[87947]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:26 compute-2 sudo[87972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:26 compute-2 sudo[87972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:26 compute-2 sudo[87972]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:26.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:27.005+0000 7f47f8ed4640 -1 osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:27 compute-2 ceph-osd[79779]: osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:27 compute-2 ceph-mon[77081]: pgmap v254: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:27 compute-2 ceph-mon[77081]: 10.17 scrub starts
Jan 22 13:38:27 compute-2 ceph-mon[77081]: 10.17 scrub ok
Jan 22 13:38:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 13:38:27 compute-2 ceph-mon[77081]: osdmap e93: 3 total, 3 up, 3 in
Jan 22 13:38:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:38:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:38:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:27.964+0000 7f47f8ed4640 -1 osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:27 compute-2 ceph-osd[79779]: osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:28.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 22 13:38:28 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.979992867s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'705 mlcod 0'0 active pruub 127.301765442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:28 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.979891777s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 127.301765442s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:28 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.984266281s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'695 mlcod 0'0 active pruub 127.307731628s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:28 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.984044075s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 127.307731628s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:28 compute-2 ceph-mon[77081]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 13:38:28 compute-2 ceph-mon[77081]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 13:38:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:28.948+0000 7f47f8ed4640 -1 osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:28 compute-2 ceph-osd[79779]: osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:29.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:29.992+0000 7f47f8ed4640 -1 osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:29 compute-2 ceph-osd[79779]: osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:30.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:30 compute-2 ceph-mon[77081]: pgmap v256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:30 compute-2 ceph-mon[77081]: Reconfiguring mgr.compute-0.nyayzk (monmap changed)...
Jan 22 13:38:30 compute-2 ceph-mon[77081]: Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 13:38:30 compute-2 ceph-mon[77081]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 13:38:30 compute-2 ceph-mon[77081]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 13:38:30 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:30 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 13:38:30 compute-2 ceph-mon[77081]: osdmap e94: 3 total, 3 up, 3 in
Jan 22 13:38:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:30 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 13:38:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:31.003+0000 7f47f8ed4640 -1 osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:31 compute-2 ceph-osd[79779]: osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 22 13:38:31 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'705 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:31 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'705 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:31 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:31 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:32.034+0000 7f47f8ed4640 -1 osd.2 95 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:32 compute-2 ceph-osd[79779]: osd.2 95 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:32 compute-2 ceph-mon[77081]: pgmap v258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 2 objects/s recovering
Jan 22 13:38:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:32 compute-2 ceph-mon[77081]: 9.c scrub starts
Jan 22 13:38:32 compute-2 ceph-mon[77081]: 9.c scrub ok
Jan 22 13:38:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 13:38:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:32 compute-2 ceph-mon[77081]: osdmap e95: 3 total, 3 up, 3 in
Jan 22 13:38:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:32 compute-2 ceph-mon[77081]: Reconfiguring osd.0 (monmap changed)...
Jan 22 13:38:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 13:38:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:32 compute-2 ceph-mon[77081]: Reconfiguring daemon osd.0 on compute-0
Jan 22 13:38:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 13:38:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 22 13:38:32 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.439438820s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'695 mlcod 0'0 active pruub 133.065887451s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:32 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.439373016s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 133.065887451s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:32 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.438467979s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'704 mlcod 0'0 active pruub 133.065811157s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:32 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.438288689s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'704 mlcod 0'0 unknown NOTIFY pruub 133.065811157s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:32 compute-2 ceph-mon[77081]: pgmap v260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 2 objects/s recovering
Jan 22 13:38:32 compute-2 ceph-mon[77081]: 10.1a deep-scrub starts
Jan 22 13:38:32 compute-2 ceph-mon[77081]: 10.1a deep-scrub ok
Jan 22 13:38:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 13:38:32 compute-2 ceph-mon[77081]: osdmap e96: 3 total, 3 up, 3 in
Jan 22 13:38:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:33.070+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:33 compute-2 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:33.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:34.090+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:34 compute-2 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 22 13:38:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 22 13:38:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:34.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:35.112+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:35 compute-2 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:35.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:36.094+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:36 compute-2 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 22 13:38:36 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:36 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:36 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'704 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:36 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'704 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:38:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:36 compute-2 ceph-mon[77081]: 9.10 scrub starts
Jan 22 13:38:36 compute-2 ceph-mon[77081]: 9.10 scrub ok
Jan 22 13:38:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:36 compute-2 ceph-mon[77081]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 13:38:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 13:38:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:36 compute-2 ceph-mon[77081]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 13:38:36 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=95/97 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] async=[1] r=0 lpr=95 pi=[72,95)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:36 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=95/97 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] async=[1] r=0 lpr=95 pi=[72,95)/1 crt=62'705 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:36.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:37.081+0000 7f47f8ed4640 -1 osd.2 97 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 97 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 22 13:38:37 compute-2 ceph-mon[77081]: pgmap v262: 305 pgs: 2 unknown, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 10.3 scrub starts
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 9.11 scrub starts
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 10.3 scrub ok
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 9.11 scrub ok
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 10.1c scrub starts
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 10.1c scrub ok
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:37 compute-2 ceph-mon[77081]: pgmap v263: 305 pgs: 2 unknown, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 10.1d scrub starts
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 10.1d scrub ok
Jan 22 13:38:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:37 compute-2 ceph-mon[77081]: osdmap e97: 3 total, 3 up, 3 in
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-2 ceph-mon[77081]: Reconfiguring osd.1 (monmap changed)...
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:37 compute-2 ceph-mon[77081]: Reconfiguring daemon osd.1 on compute-1
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=95/97 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.930242538s) [1] async=[1] r=-1 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 62'705 active pruub 141.042510986s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=95/97 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.930095673s) [1] r=-1 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 141.042510986s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=95/97 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.923345566s) [1] async=[1] r=-1 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 62'695 active pruub 141.037170410s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=95/97 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.923262596s) [1] r=-1 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 141.037170410s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=97/98 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] async=[1] r=0 lpr=97 pi=[70,97)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=97/98 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] async=[1] r=0 lpr=97 pi=[70,97)/1 crt=62'704 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:38:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:37.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=97/98 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.695192337s) [1] async=[1] r=-1 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 62'704 active pruub 142.123626709s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.694999695s) [1] async=[1] r=-1 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 62'695 active pruub 142.123489380s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=97/98 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.695041656s) [1] r=-1 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 0'0 unknown NOTIFY pruub 142.123626709s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:37 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.694853783s) [1] r=-1 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 142.123489380s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:38:37 compute-2 sudo[88029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:37 compute-2 sudo[88029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:37 compute-2 sudo[88029]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:37 compute-2 sudo[88054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:38:37 compute-2 sudo[88054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:37 compute-2 sudo[88054]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:37 compute-2 sudo[88083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:37 compute-2 sudo[88083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:37 compute-2 sudo[88083]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-2 sudo[88110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 13:38:38 compute-2 sudo[88110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:38 compute-2 ceph-osd[79779]: osd.2 99 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:38.077+0000 7f47f8ed4640 -1 osd.2 99 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:38 compute-2 ceph-mon[77081]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 13:38:38 compute-2 ceph-mon[77081]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 13:38:38 compute-2 ceph-mon[77081]: osdmap e98: 3 total, 3 up, 3 in
Jan 22 13:38:38 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:38 compute-2 ceph-mon[77081]: osdmap e99: 3 total, 3 up, 3 in
Jan 22 13:38:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 13:38:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 22 13:38:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:38 compute-2 podman[88157]: 2026-01-22 13:38:38.328139952 +0000 UTC m=+0.060140082 container create 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 13:38:38 compute-2 systemd[72610]: Created slice User Background Tasks Slice.
Jan 22 13:38:38 compute-2 systemd[72610]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 13:38:38 compute-2 systemd[1]: Started libpod-conmon-5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2.scope.
Jan 22 13:38:38 compute-2 systemd[72610]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 13:38:38 compute-2 podman[88157]: 2026-01-22 13:38:38.294961088 +0000 UTC m=+0.026961228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 13:38:38 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:38:38 compute-2 podman[88157]: 2026-01-22 13:38:38.414216623 +0000 UTC m=+0.146216763 container init 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 13:38:38 compute-2 podman[88157]: 2026-01-22 13:38:38.424264604 +0000 UTC m=+0.156264754 container start 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 13:38:38 compute-2 podman[88157]: 2026-01-22 13:38:38.428132968 +0000 UTC m=+0.160133098 container attach 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 13:38:38 compute-2 lucid_goodall[88175]: 167 167
Jan 22 13:38:38 compute-2 systemd[1]: libpod-5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2.scope: Deactivated successfully.
Jan 22 13:38:38 compute-2 podman[88157]: 2026-01-22 13:38:38.431756926 +0000 UTC m=+0.163757056 container died 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:38:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-51cf3b20bd3f08c12ca9ec3d431bee0bcc4aa1b871af0f8879d943dd97fb9da3-merged.mount: Deactivated successfully.
Jan 22 13:38:38 compute-2 podman[88157]: 2026-01-22 13:38:38.485383902 +0000 UTC m=+0.217384022 container remove 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 13:38:38 compute-2 systemd[1]: libpod-conmon-5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2.scope: Deactivated successfully.
Jan 22 13:38:38 compute-2 sudo[88110]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 22 13:38:38 compute-2 sudo[88201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:38 compute-2 sudo[88201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:38 compute-2 sudo[88201]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-2 sudo[88226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:38:38 compute-2 sudo[88226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:38 compute-2 sudo[88226]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-2 sudo[88251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:38 compute-2 sudo[88251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:38 compute-2 sudo[88251]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:38 compute-2 sudo[88276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:38:38 compute-2 sudo[88276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:39 compute-2 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.11 deep-scrub starts
Jan 22 13:38:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:39.110+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.11 deep-scrub ok
Jan 22 13:38:39 compute-2 ceph-mon[77081]: pgmap v266: 305 pgs: 2 remapped+peering, 2 unknown, 2 active+clean+laggy, 299 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:39 compute-2 ceph-mon[77081]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 13:38:39 compute-2 ceph-mon[77081]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 13:38:39 compute-2 ceph-mon[77081]: 9.14 scrub starts
Jan 22 13:38:39 compute-2 ceph-mon[77081]: 9.14 scrub ok
Jan 22 13:38:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:39 compute-2 ceph-mon[77081]: osdmap e100: 3 total, 3 up, 3 in
Jan 22 13:38:39 compute-2 ceph-mon[77081]: 10.11 deep-scrub starts
Jan 22 13:38:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:39 compute-2 ceph-mon[77081]: 10.11 deep-scrub ok
Jan 22 13:38:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:39.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:39 compute-2 podman[88373]: 2026-01-22 13:38:39.555612496 +0000 UTC m=+0.204700360 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 22 13:38:40 compute-2 podman[88373]: 2026-01-22 13:38:40.024368124 +0000 UTC m=+0.673455968 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 13:38:40 compute-2 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:40.133+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:40.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:40 compute-2 ceph-mon[77081]: pgmap v269: 305 pgs: 2 peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s; 137 B/s, 5 objects/s recovering
Jan 22 13:38:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:40 compute-2 ceph-mon[77081]: 9.1c scrub starts
Jan 22 13:38:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:40 compute-2 ceph-mon[77081]: 9.1c scrub ok
Jan 22 13:38:41 compute-2 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:41.089+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:41 compute-2 podman[88527]: 2026-01-22 13:38:41.141552863 +0000 UTC m=+0.071719544 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:38:41 compute-2 podman[88527]: 2026-01-22 13:38:41.151544403 +0000 UTC m=+0.081711064 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:38:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:41.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:41 compute-2 podman[88589]: 2026-01-22 13:38:41.774450176 +0000 UTC m=+0.078378494 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, description=keepalived for Ceph, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, name=keepalived)
Jan 22 13:38:41 compute-2 podman[88589]: 2026-01-22 13:38:41.788697461 +0000 UTC m=+0.092625779 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 13:38:41 compute-2 sudo[88276]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:42 compute-2 ceph-mon[77081]: 11.2 scrub starts
Jan 22 13:38:42 compute-2 ceph-mon[77081]: 11.2 scrub ok
Jan 22 13:38:42 compute-2 ceph-mon[77081]: 10.1f scrub starts
Jan 22 13:38:42 compute-2 ceph-mon[77081]: 10.1f scrub ok
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:38:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:38:42 compute-2 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:42.068+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:43 compute-2 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:43.030+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:43 compute-2 ceph-mon[77081]: pgmap v270: 305 pgs: 1 active+clean+scrubbing, 2 peering, 2 active+clean+laggy, 300 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 4 objects/s recovering
Jan 22 13:38:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:43 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:43.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:43 compute-2 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:43.985+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:44.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:44 compute-2 ceph-mon[77081]: 11.1e scrub starts
Jan 22 13:38:44 compute-2 ceph-mon[77081]: 11.1e scrub ok
Jan 22 13:38:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 13:38:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 22 13:38:45 compute-2 ceph-osd[79779]: osd.2 101 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.a deep-scrub starts
Jan 22 13:38:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:45.001+0000 7f47f8ed4640 -1 osd.2 101 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.a deep-scrub ok
Jan 22 13:38:45 compute-2 ceph-mon[77081]: pgmap v271: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 123 B/s, 5 objects/s recovering
Jan 22 13:38:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 13:38:45 compute-2 ceph-mon[77081]: osdmap e101: 3 total, 3 up, 3 in
Jan 22 13:38:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:45 compute-2 ceph-mon[77081]: 11.a deep-scrub starts
Jan 22 13:38:45 compute-2 ceph-mon[77081]: 11.a deep-scrub ok
Jan 22 13:38:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 22 13:38:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:46 compute-2 ceph-osd[79779]: osd.2 102 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:46.012+0000 7f47f8ed4640 -1 osd.2 102 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:46 compute-2 ceph-mon[77081]: 11.6 scrub starts
Jan 22 13:38:46 compute-2 ceph-mon[77081]: 11.6 scrub ok
Jan 22 13:38:46 compute-2 ceph-mon[77081]: pgmap v273: 305 pgs: 1 active+clean+scrubbing, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 97 B/s, 4 objects/s recovering
Jan 22 13:38:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 13:38:46 compute-2 ceph-mon[77081]: osdmap e102: 3 total, 3 up, 3 in
Jan 22 13:38:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 22 13:38:46 compute-2 sudo[88626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:46 compute-2 sudo[88626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:46 compute-2 sudo[88626]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:46 compute-2 sudo[88651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:46 compute-2 sudo[88651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:46 compute-2 sudo[88651]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:47 compute-2 ceph-osd[79779]: osd.2 103 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:47.030+0000 7f47f8ed4640 -1 osd.2 103 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 22 13:38:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 22 13:38:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 22 13:38:47 compute-2 ceph-mon[77081]: 11.9 scrub starts
Jan 22 13:38:47 compute-2 ceph-mon[77081]: 11.9 scrub ok
Jan 22 13:38:47 compute-2 ceph-mon[77081]: 11.1d scrub starts
Jan 22 13:38:47 compute-2 ceph-mon[77081]: 11.1d scrub ok
Jan 22 13:38:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 13:38:47 compute-2 ceph-mon[77081]: osdmap e103: 3 total, 3 up, 3 in
Jan 22 13:38:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:47 compute-2 ceph-mon[77081]: 10.10 scrub starts
Jan 22 13:38:47 compute-2 ceph-mon[77081]: 10.10 scrub ok
Jan 22 13:38:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 22 13:38:48 compute-2 ceph-osd[79779]: osd.2 104 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:48.068+0000 7f47f8ed4640 -1 osd.2 104 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 22 13:38:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 22 13:38:48 compute-2 ceph-mon[77081]: pgmap v276: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 22 13:38:48 compute-2 ceph-mon[77081]: osdmap e104: 3 total, 3 up, 3 in
Jan 22 13:38:48 compute-2 ceph-mon[77081]: 10.f scrub starts
Jan 22 13:38:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:48 compute-2 ceph-mon[77081]: 10.f scrub ok
Jan 22 13:38:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:38:48 compute-2 ceph-mon[77081]: osdmap e105: 3 total, 3 up, 3 in
Jan 22 13:38:48 compute-2 sudo[88684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:38:48 compute-2 sudo[88684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:48 compute-2 sudo[88684]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 22 13:38:49 compute-2 ceph-osd[79779]: osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:49.021+0000 7f47f8ed4640 -1 osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:49 compute-2 sudo[88709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:38:49 compute-2 sudo[88709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:38:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 22 13:38:49 compute-2 sudo[88709]: pam_unix(sudo:session): session closed for user root
Jan 22 13:38:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:50 compute-2 ceph-osd[79779]: osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:50.018+0000 7f47f8ed4640 -1 osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:50.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:51 compute-2 ceph-osd[79779]: osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:51.034+0000 7f47f8ed4640 -1 osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:51 compute-2 ceph-mon[77081]: 10.12 scrub starts
Jan 22 13:38:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:51 compute-2 ceph-mon[77081]: 10.12 scrub ok
Jan 22 13:38:51 compute-2 ceph-mon[77081]: 11.b scrub starts
Jan 22 13:38:51 compute-2 ceph-mon[77081]: 11.b scrub ok
Jan 22 13:38:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 22 13:38:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 22 13:38:52 compute-2 ceph-osd[79779]: osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:52.016+0000 7f47f8ed4640 -1 osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 22 13:38:52 compute-2 ceph-mon[77081]: pgmap v279: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:38:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:52 compute-2 ceph-mon[77081]: 8.1b scrub starts
Jan 22 13:38:52 compute-2 ceph-mon[77081]: 8.1b scrub ok
Jan 22 13:38:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:52 compute-2 ceph-mon[77081]: 11.1 scrub starts
Jan 22 13:38:52 compute-2 ceph-mon[77081]: 11.1 scrub ok
Jan 22 13:38:52 compute-2 ceph-mon[77081]: osdmap e106: 3 total, 3 up, 3 in
Jan 22 13:38:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:52.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 22 13:38:52 compute-2 ceph-osd[79779]: osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:52.985+0000 7f47f8ed4640 -1 osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:53 compute-2 ceph-mon[77081]: pgmap v280: 305 pgs: 1 active+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 13:38:53 compute-2 ceph-mon[77081]: 8.9 scrub starts
Jan 22 13:38:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:53 compute-2 ceph-mon[77081]: 8.9 scrub ok
Jan 22 13:38:53 compute-2 ceph-mon[77081]: 8.8 scrub starts
Jan 22 13:38:53 compute-2 ceph-mon[77081]: 8.8 scrub ok
Jan 22 13:38:53 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:53 compute-2 ceph-mon[77081]: osdmap e107: 3 total, 3 up, 3 in
Jan 22 13:38:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:53 compute-2 ceph-osd[79779]: osd.2 107 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 22 13:38:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:53.972+0000 7f47f8ed4640 -1 osd.2 107 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 22 13:38:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:54.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 13:38:54 compute-2 ceph-mon[77081]: 10.1 scrub starts
Jan 22 13:38:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:54 compute-2 ceph-mon[77081]: 10.1 scrub ok
Jan 22 13:38:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 22 13:38:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:54.997+0000 7f47f8ed4640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:54 compute-2 ceph-osd[79779]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 22 13:38:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 22 13:38:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:55.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:55 compute-2 ceph-mon[77081]: pgmap v283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 22 13:38:55 compute-2 ceph-mon[77081]: 11.c scrub starts
Jan 22 13:38:55 compute-2 ceph-mon[77081]: 11.c scrub ok
Jan 22 13:38:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 13:38:55 compute-2 ceph-mon[77081]: osdmap e108: 3 total, 3 up, 3 in
Jan 22 13:38:55 compute-2 ceph-mon[77081]: 11.8 scrub starts
Jan 22 13:38:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:55 compute-2 ceph-mon[77081]: 11.8 scrub ok
Jan 22 13:38:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 22 13:38:56 compute-2 ceph-osd[79779]: osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:56.012+0000 7f47f8ed4640 -1 osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:38:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:56.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:38:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:38:56 compute-2 ceph-mon[77081]: 8.14 deep-scrub starts
Jan 22 13:38:56 compute-2 ceph-mon[77081]: 8.14 deep-scrub ok
Jan 22 13:38:56 compute-2 ceph-mon[77081]: pgmap v285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 22 13:38:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 13:38:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 13:38:56 compute-2 ceph-mon[77081]: osdmap e109: 3 total, 3 up, 3 in
Jan 22 13:38:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:56 compute-2 ceph-mon[77081]: 11.d scrub starts
Jan 22 13:38:56 compute-2 ceph-mon[77081]: 11.d scrub ok
Jan 22 13:38:56 compute-2 ceph-osd[79779]: osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:56.968+0000 7f47f8ed4640 -1 osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:57 compute-2 ceph-osd[79779]: osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:57.948+0000 7f47f8ed4640 -1 osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 22 13:38:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:58 compute-2 ceph-mon[77081]: 8.10 scrub starts
Jan 22 13:38:58 compute-2 ceph-mon[77081]: 8.10 scrub ok
Jan 22 13:38:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 13:38:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:58.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:58 compute-2 ceph-osd[79779]: osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:58.966+0000 7f47f8ed4640 -1 osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:59 compute-2 ceph-mon[77081]: pgmap v287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:38:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:59 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:38:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 13:38:59 compute-2 ceph-mon[77081]: osdmap e110: 3 total, 3 up, 3 in
Jan 22 13:38:59 compute-2 ceph-mon[77081]: 11.5 scrub starts
Jan 22 13:38:59 compute-2 ceph-mon[77081]: 11.5 scrub ok
Jan 22 13:38:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:38:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:38:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:38:59 compute-2 ceph-osd[79779]: osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:38:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:38:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:59.944+0000 7f47f8ed4640 -1 osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 22 13:39:00 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 111 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=111 pruub=10.250331879s) [1] r=-1 lpr=111 pi=[72,111)/1 crt=62'690 mlcod 0'0 active pruub 159.308609009s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:00 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 111 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=111 pruub=10.250194550s) [1] r=-1 lpr=111 pi=[72,111)/1 crt=62'690 mlcod 0'0 unknown NOTIFY pruub 159.308609009s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:00 compute-2 ceph-mon[77081]: 11.10 scrub starts
Jan 22 13:39:00 compute-2 ceph-mon[77081]: 11.10 scrub ok
Jan 22 13:39:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 13:39:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:00.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:00 compute-2 ceph-osd[79779]: osd.2 111 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:00.951+0000 7f47f8ed4640 -1 osd.2 111 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 22 13:39:01 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 112 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=0 lpr=112 pi=[72,112)/1 crt=62'690 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:01 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 112 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=0 lpr=112 pi=[72,112)/1 crt=62'690 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:01 compute-2 ceph-mon[77081]: pgmap v289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 13:39:01 compute-2 ceph-mon[77081]: osdmap e111: 3 total, 3 up, 3 in
Jan 22 13:39:01 compute-2 ceph-mon[77081]: osdmap e112: 3 total, 3 up, 3 in
Jan 22 13:39:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:01 compute-2 ceph-osd[79779]: osd.2 112 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:01.922+0000 7f47f8ed4640 -1 osd.2 112 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 22 13:39:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:02 compute-2 ceph-mon[77081]: 11.11 scrub starts
Jan 22 13:39:02 compute-2 ceph-mon[77081]: 11.11 scrub ok
Jan 22 13:39:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 13:39:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:02.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:02 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 113 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=113) [2] r=0 lpr=113 pi=[77,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:02 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 113 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=112/113 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[72,112)/1 crt=62'690 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 22 13:39:02 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=-1 lpr=114 pi=[77,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:02 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=-1 lpr=114 pi=[77,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:02 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:02.886+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:03 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:03.922+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:04.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:04 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:04.919+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:39:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:05.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:39:05 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:05.913+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:06.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:06 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:06.883+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 22 13:39:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 22 13:39:06 compute-2 sudo[88763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:07 compute-2 sudo[88763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:07 compute-2 sudo[88763]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:07 compute-2 sudo[88788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:07 compute-2 sudo[88788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:07 compute-2 sudo[88788]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:07 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:07.889+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:08.848+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:08 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:09.870+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:09 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:10.827+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:10 compute-2 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:11 compute-2 ceph-mon[77081]: pgmap v292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:11 compute-2 ceph-mon[77081]: 11.7 scrub starts
Jan 22 13:39:11 compute-2 ceph-mon[77081]: 11.7 scrub ok
Jan 22 13:39:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 13:39:11 compute-2 ceph-mon[77081]: osdmap e113: 3 total, 3 up, 3 in
Jan 22 13:39:11 compute-2 ceph-mon[77081]: osdmap e114: 3 total, 3 up, 3 in
Jan 22 13:39:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2[77077]: 2026-01-22T13:39:11.216+0000 7f661ae92640 -1 mon.compute-2@1(peon).paxos(paxos updating c 1..711) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.505236149s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 22 13:39:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).paxos(paxos updating c 1..711) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.505236149s seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 22 13:39:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 22 13:39:11 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=112/113 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115 pruub=15.244450569s) [1] async=[1] r=-1 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 62'690 active pruub 175.391098022s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:11 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=112/113 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115 pruub=15.244197845s) [1] r=-1 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 0'0 unknown NOTIFY pruub 175.391098022s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:11 compute-2 ceph-mon[77081]: pgmap v295: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:11 compute-2 ceph-mon[77081]: 11.15 scrub starts
Jan 22 13:39:11 compute-2 ceph-mon[77081]: 11.15 scrub ok
Jan 22 13:39:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:11.789+0000 7f47f8ed4640 -1 osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:11 compute-2 ceph-osd[79779]: osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:11.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:12.780+0000 7f47f8ed4640 -1 osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:12 compute-2 ceph-osd[79779]: osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 22 13:39:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:13 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: pgmap v296: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 8.19 scrub starts
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 8.19 scrub ok
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 8.6 scrub starts
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 8.6 scrub ok
Jan 22 13:39:13 compute-2 ceph-mon[77081]: pgmap v297: 305 pgs: 1 active+remapped, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 B/s, 0 objects/s recovering
Jan 22 13:39:13 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 11.18 deep-scrub starts
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 11.18 deep-scrub ok
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: pgmap v298: 305 pgs: 1 remapped+peering, 1 active+remapped, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-mon[77081]: osdmap e115: 3 total, 3 up, 3 in
Jan 22 13:39:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:13 compute-2 ceph-osd[79779]: osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:13.797+0000 7f47f8ed4640 -1 osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Jan 22 13:39:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Jan 22 13:39:13 compute-2 sudo[87819]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:14 compute-2 sudo[88966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieumonueupzqugxhjdvcleziqafmacma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089154.1450741-372-16998893950001/AnsiballZ_command.py'
Jan 22 13:39:14 compute-2 sudo[88966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:14.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:14 compute-2 ceph-mon[77081]: pgmap v300: 305 pgs: 1 active+remapped, 1 remapped+peering, 2 active+clean+laggy, 301 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:14 compute-2 ceph-mon[77081]: osdmap e116: 3 total, 3 up, 3 in
Jan 22 13:39:14 compute-2 ceph-mon[77081]: 11.4 scrub starts
Jan 22 13:39:14 compute-2 ceph-mon[77081]: 11.4 scrub ok
Jan 22 13:39:14 compute-2 python3.9[88968]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:39:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:14.832+0000 7f47f8ed4640 -1 osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:14 compute-2 ceph-osd[79779]: osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 22 13:39:15 compute-2 sudo[88966]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:15 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 117 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=116/117 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:15 compute-2 ceph-mon[77081]: pgmap v302: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:15 compute-2 ceph-mon[77081]: 8.1f deep-scrub starts
Jan 22 13:39:15 compute-2 ceph-mon[77081]: 8.1f deep-scrub ok
Jan 22 13:39:15 compute-2 ceph-mon[77081]: 11.f scrub starts
Jan 22 13:39:15 compute-2 ceph-mon[77081]: 11.f scrub ok
Jan 22 13:39:15 compute-2 ceph-mon[77081]: osdmap e117: 3 total, 3 up, 3 in
Jan 22 13:39:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:15.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:15.799+0000 7f47f8ed4640 -1 osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:15 compute-2 ceph-osd[79779]: osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:16 compute-2 sudo[89253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuuctvgbjqvpiigxszpqowmycuvysork ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089155.6997116-396-185531413398995/AnsiballZ_selinux.py'
Jan 22 13:39:16 compute-2 sudo[89253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:16 compute-2 python3.9[89255]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 13:39:16 compute-2 sudo[89253]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:16.765+0000 7f47f8ed4640 -1 osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:16 compute-2 ceph-osd[79779]: osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:16 compute-2 ceph-mon[77081]: pgmap v304: 305 pgs: 1 remapped+peering, 2 active+clean+laggy, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:17 compute-2 sudo[89406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hskcvcgjpcbjcxbdxypkcakhbuqfpfnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089157.1837037-429-15189843701134/AnsiballZ_command.py'
Jan 22 13:39:17 compute-2 sudo[89406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:17 compute-2 python3.9[89408]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 13:39:17 compute-2 sudo[89406]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:17.748+0000 7f47f8ed4640 -1 osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:17 compute-2 ceph-osd[79779]: osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 22 13:39:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 22 13:39:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 13:39:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 13:39:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 22 13:39:17 compute-2 ceph-mon[77081]: 11.1c scrub starts
Jan 22 13:39:17 compute-2 ceph-mon[77081]: 11.1c scrub ok
Jan 22 13:39:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:17 compute-2 ceph-mon[77081]: 11.1f scrub starts
Jan 22 13:39:17 compute-2 ceph-mon[77081]: 11.1f scrub ok
Jan 22 13:39:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 13:39:17 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:18 compute-2 sudo[89558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noioyvcppvdwkzqyoegweiocslinbban ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089157.9968908-453-213663628558363/AnsiballZ_file.py'
Jan 22 13:39:18 compute-2 sudo[89558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:18 compute-2 python3.9[89560]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:39:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:18 compute-2 sudo[89558]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:18.770+0000 7f47f8ed4640 -1 osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:18 compute-2 ceph-osd[79779]: osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 22 13:39:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 22 13:39:19 compute-2 sudo[89711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzoggmbonhmdfwjyvsbecrjrygkmglpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089158.712368-477-21211659869489/AnsiballZ_mount.py'
Jan 22 13:39:19 compute-2 sudo[89711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:19 compute-2 ceph-mon[77081]: pgmap v305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 17 B/s, 0 objects/s recovering
Jan 22 13:39:19 compute-2 ceph-mon[77081]: 11.3 scrub starts
Jan 22 13:39:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:19 compute-2 ceph-mon[77081]: 11.3 scrub ok
Jan 22 13:39:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 13:39:19 compute-2 ceph-mon[77081]: osdmap e118: 3 total, 3 up, 3 in
Jan 22 13:39:19 compute-2 ceph-mon[77081]: 10.14 scrub starts
Jan 22 13:39:19 compute-2 ceph-mon[77081]: 10.14 scrub ok
Jan 22 13:39:19 compute-2 python3.9[89713]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 13:39:19 compute-2 sudo[89711]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:19.727+0000 7f47f8ed4640 -1 osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:19 compute-2 ceph-osd[79779]: osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 22 13:39:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 22 13:39:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:20.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:20.772+0000 7f47f8ed4640 -1 osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:20 compute-2 ceph-osd[79779]: osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:20 compute-2 sudo[89864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjtkdbiqxepuhxcvoxzntcwqxeubuzee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089160.6260915-561-37122588794286/AnsiballZ_file.py'
Jan 22 13:39:20 compute-2 sudo[89864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 8.11 scrub starts
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 8.11 scrub ok
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 8.12 scrub starts
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 8.12 scrub ok
Jan 22 13:39:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 11.19 scrub starts
Jan 22 13:39:20 compute-2 ceph-mon[77081]: 11.19 scrub ok
Jan 22 13:39:21 compute-2 python3.9[89866]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:21 compute-2 sudo[89864]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:21 compute-2 sudo[90016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkgcviiccqrrkzjsffiqoyktijtjqptu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089161.5217586-586-68103122315322/AnsiballZ_stat.py'
Jan 22 13:39:21 compute-2 sudo[90016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:21.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:21.810+0000 7f47f8ed4640 -1 osd.2 119 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:21 compute-2 ceph-osd[79779]: osd.2 119 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 22 13:39:22 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 120 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=120 pruub=10.775048256s) [0] r=-1 lpr=120 pi=[86,120)/1 crt=62'705 mlcod 0'0 active pruub 181.590789795s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:22 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 120 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=120 pruub=10.773756981s) [0] r=-1 lpr=120 pi=[86,120)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 181.590789795s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:22 compute-2 python3.9[90018]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:39:22 compute-2 ceph-mon[77081]: pgmap v307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 16 B/s, 0 objects/s recovering
Jan 22 13:39:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 13:39:22 compute-2 ceph-mon[77081]: osdmap e119: 3 total, 3 up, 3 in
Jan 22 13:39:22 compute-2 ceph-mon[77081]: 11.12 scrub starts
Jan 22 13:39:22 compute-2 ceph-mon[77081]: 11.12 scrub ok
Jan 22 13:39:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 13:39:22 compute-2 sudo[90016]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:22 compute-2 sudo[90094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omrafrzfdbbnjzigksxxedgtgyrdpkux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089161.5217586-586-68103122315322/AnsiballZ_file.py'
Jan 22 13:39:22 compute-2 sudo[90094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:22 compute-2 python3.9[90097]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:39:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:22.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:22 compute-2 sudo[90094]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 22 13:39:22 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 121 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] r=0 lpr=121 pi=[86,121)/1 crt=62'705 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:22 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 121 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] r=0 lpr=121 pi=[86,121)/1 crt=62'705 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:22.835+0000 7f47f8ed4640 -1 osd.2 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:22 compute-2 ceph-osd[79779]: osd.2 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.b deep-scrub starts
Jan 22 13:39:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.b deep-scrub ok
Jan 22 13:39:23 compute-2 ceph-mon[77081]: pgmap v309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 13:39:23 compute-2 ceph-mon[77081]: osdmap e120: 3 total, 3 up, 3 in
Jan 22 13:39:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:23 compute-2 ceph-mon[77081]: osdmap e121: 3 total, 3 up, 3 in
Jan 22 13:39:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Jan 22 13:39:23 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 122 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=121/122 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] async=[0] r=0 lpr=121 pi=[86,121)/1 crt=62'705 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:23 compute-2 sudo[90247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqdaxfdstonvmmvnctbyqxqpprmbbpxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089163.552187-648-160646627315201/AnsiballZ_stat.py'
Jan 22 13:39:23 compute-2 sudo[90247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:23.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:23.874+0000 7f47f8ed4640 -1 osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:23 compute-2 ceph-osd[79779]: osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 22 13:39:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 22 13:39:23 compute-2 python3.9[90249]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:39:23 compute-2 sudo[90247]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:24 compute-2 ceph-mon[77081]: 8.b deep-scrub starts
Jan 22 13:39:24 compute-2 ceph-mon[77081]: 8.b deep-scrub ok
Jan 22 13:39:24 compute-2 ceph-mon[77081]: 11.1a scrub starts
Jan 22 13:39:24 compute-2 ceph-mon[77081]: 11.1a scrub ok
Jan 22 13:39:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 13:39:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 13:39:24 compute-2 ceph-mon[77081]: osdmap e122: 3 total, 3 up, 3 in
Jan 22 13:39:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:24.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:24.917+0000 7f47f8ed4640 -1 osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:24 compute-2 ceph-osd[79779]: osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 22 13:39:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 22 13:39:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Jan 22 13:39:25 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 123 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=121/122 n=6 ec=59/49 lis/c=121/86 les/c/f=122/88/0 sis=123 pruub=14.623571396s) [0] async=[0] r=-1 lpr=123 pi=[86,123)/1 crt=62'705 mlcod 62'705 active pruub 188.486572266s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:25 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 123 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=121/122 n=6 ec=59/49 lis/c=121/86 les/c/f=122/88/0 sis=123 pruub=14.623458862s) [0] r=-1 lpr=123 pi=[86,123)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 188.486572266s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:25 compute-2 sudo[90402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tocpkvommdpqxkxyiuqyhbuwqbrnoavg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089164.875561-687-250298960012374/AnsiballZ_getent.py'
Jan 22 13:39:25 compute-2 sudo[90402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:25 compute-2 python3.9[90404]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 13:39:25 compute-2 sudo[90402]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:25.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:25.923+0000 7f47f8ed4640 -1 osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:25 compute-2 ceph-osd[79779]: osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:26 compute-2 sudo[90556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwmswsgwtjuwpcedhmdlfltotvmmacwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089166.059002-717-106364939718101/AnsiballZ_getent.py'
Jan 22 13:39:26 compute-2 sudo[90556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:26 compute-2 python3.9[90558]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 13:39:26 compute-2 sudo[90556]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:26.940+0000 7f47f8ed4640 -1 osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:26 compute-2 ceph-osd[79779]: osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:27 compute-2 sudo[90636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:27 compute-2 sudo[90636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:27 compute-2 sudo[90636]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:27 compute-2 sudo[90684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:27 compute-2 sudo[90684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:27 compute-2 ceph-mon[77081]: pgmap v312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:27 compute-2 ceph-mon[77081]: 8.d scrub starts
Jan 22 13:39:27 compute-2 ceph-mon[77081]: 8.d scrub ok
Jan 22 13:39:27 compute-2 ceph-mon[77081]: 10.13 scrub starts
Jan 22 13:39:27 compute-2 ceph-mon[77081]: 10.13 scrub ok
Jan 22 13:39:27 compute-2 ceph-mon[77081]: osdmap e123: 3 total, 3 up, 3 in
Jan 22 13:39:27 compute-2 sudo[90684]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:27 compute-2 sudo[90759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmdnyzthlnostepkkrxcbxrvjfonnnyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089166.9040577-741-184044425042016/AnsiballZ_group.py'
Jan 22 13:39:27 compute-2 sudo[90759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Jan 22 13:39:27 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 124 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=124 pruub=12.977127075s) [0] r=-1 lpr=124 pi=[70,124)/1 crt=61'686 mlcod 0'0 active pruub 189.279891968s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:27 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 124 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=124 pruub=12.977048874s) [0] r=-1 lpr=124 pi=[70,124)/1 crt=61'686 mlcod 0'0 unknown NOTIFY pruub 189.279891968s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:27 compute-2 python3.9[90761]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:39:27 compute-2 sudo[90759]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Jan 22 13:39:27 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 125 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] r=0 lpr=125 pi=[70,125)/1 crt=61'686 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:27 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 125 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] r=0 lpr=125 pi=[70,125)/1 crt=61'686 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:27.945+0000 7f47f8ed4640 -1 osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:27 compute-2 ceph-osd[79779]: osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Jan 22 13:39:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Jan 22 13:39:28 compute-2 sudo[90911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyzltfnopuxayhaclcoovwcqsvxxteqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089167.8868065-768-11349583193197/AnsiballZ_file.py'
Jan 22 13:39:28 compute-2 sudo[90911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:28 compute-2 python3.9[90913]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 13:39:28 compute-2 sudo[90911]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:28.952+0000 7f47f8ed4640 -1 osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:28 compute-2 ceph-osd[79779]: osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 22 13:39:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 11.e scrub starts
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 11.e scrub ok
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 10.15 scrub starts
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 10.15 scrub ok
Jan 22 13:39:29 compute-2 ceph-mon[77081]: pgmap v315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 13:39:29 compute-2 ceph-mon[77081]: osdmap e124: 3 total, 3 up, 3 in
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 11.14 scrub starts
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 11.14 scrub ok
Jan 22 13:39:29 compute-2 ceph-mon[77081]: osdmap e125: 3 total, 3 up, 3 in
Jan 22 13:39:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Jan 22 13:39:29 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 126 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=125/126 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] async=[0] r=0 lpr=125 pi=[70,125)/1 crt=61'686 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:29 compute-2 sudo[91064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemlzxycxqlhftnzfwnbaggfgkzeesid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089168.935196-800-226352370399032/AnsiballZ_dnf.py'
Jan 22 13:39:29 compute-2 sudo[91064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:29 compute-2 python3.9[91066]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:39:29 compute-2 ceph-mon[77081]: pgmap v317: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 22 B/s, 1 objects/s recovering
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 8.a deep-scrub starts
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 8.a deep-scrub ok
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 10.5 scrub starts
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 10.5 scrub ok
Jan 22 13:39:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 8.3 scrub starts
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:29 compute-2 ceph-mon[77081]: 8.3 scrub ok
Jan 22 13:39:29 compute-2 ceph-mon[77081]: osdmap e126: 3 total, 3 up, 3 in
Jan 22 13:39:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 13:39:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:29.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 13:39:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:29.940+0000 7f47f8ed4640 -1 osd.2 126 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:29 compute-2 ceph-osd[79779]: osd.2 126 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Jan 22 13:39:30 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 127 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=125/126 n=3 ec=59/49 lis/c=125/70 les/c/f=126/71/0 sis=127 pruub=14.943853378s) [0] async=[0] r=-1 lpr=127 pi=[70,127)/1 crt=61'686 mlcod 61'686 active pruub 193.897628784s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:30 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 127 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=125/126 n=3 ec=59/49 lis/c=125/70 les/c/f=126/71/0 sis=127 pruub=14.943747520s) [0] r=-1 lpr=127 pi=[70,127)/1 crt=61'686 mlcod 0'0 unknown NOTIFY pruub 193.897628784s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 13:39:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 13:39:30 compute-2 ceph-mon[77081]: 10.18 scrub starts
Jan 22 13:39:30 compute-2 ceph-mon[77081]: 10.18 scrub ok
Jan 22 13:39:30 compute-2 ceph-mon[77081]: pgmap v320: 305 pgs: 1 active+remapped, 1 peering, 2 active+clean+laggy, 301 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 49 B/s, 2 objects/s recovering
Jan 22 13:39:30 compute-2 ceph-mon[77081]: 11.1b scrub starts
Jan 22 13:39:30 compute-2 ceph-mon[77081]: 11.1b scrub ok
Jan 22 13:39:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:30 compute-2 ceph-mon[77081]: osdmap e127: 3 total, 3 up, 3 in
Jan 22 13:39:30 compute-2 sudo[91064]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:30.943+0000 7f47f8ed4640 -1 osd.2 127 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:30 compute-2 ceph-osd[79779]: osd.2 127 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Jan 22 13:39:31 compute-2 sudo[91218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbnkknwagcwmnhvdwcwxpabxopiyrkxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089171.3022048-825-124750823632027/AnsiballZ_file.py'
Jan 22 13:39:31 compute-2 sudo[91218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:31 compute-2 python3.9[91220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:31 compute-2 sudo[91218]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:31.963+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:31 compute-2 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 22 13:39:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 22 13:39:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:32 compute-2 ceph-mon[77081]: osdmap e128: 3 total, 3 up, 3 in
Jan 22 13:39:32 compute-2 sudo[91371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czdtojktgjktwuiiitljnbprtasidzpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089172.0492651-850-145675602334691/AnsiballZ_stat.py'
Jan 22 13:39:32 compute-2 sudo[91371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:32 compute-2 python3.9[91373]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:39:32 compute-2 sudo[91371]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:32 compute-2 sudo[91449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrahhqbvwwkvkvobxzwcupfbjbjkkyti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089172.0492651-850-145675602334691/AnsiballZ_file.py'
Jan 22 13:39:32 compute-2 sudo[91449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:32 compute-2 python3.9[91451]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:32 compute-2 sudo[91449]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:33.009+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:33 compute-2 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:33 compute-2 ceph-mon[77081]: pgmap v323: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 1 objects/s recovering
Jan 22 13:39:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:33 compute-2 ceph-mon[77081]: 11.16 scrub starts
Jan 22 13:39:33 compute-2 ceph-mon[77081]: 11.16 scrub ok
Jan 22 13:39:33 compute-2 ceph-mon[77081]: 8.18 deep-scrub starts
Jan 22 13:39:33 compute-2 ceph-mon[77081]: 8.18 deep-scrub ok
Jan 22 13:39:33 compute-2 sudo[91601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twoqlwxypxirjngpbhegjyrncycumpko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089173.437428-888-101338522834828/AnsiballZ_stat.py'
Jan 22 13:39:33 compute-2 sudo[91601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:39:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:39:33 compute-2 python3.9[91603]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:39:33 compute-2 sudo[91601]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 22 13:39:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:34.036+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:34 compute-2 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 22 13:39:34 compute-2 sudo[91679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppfshlakdpbwjncjufocqwlybokqhwhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089173.437428-888-101338522834828/AnsiballZ_file.py'
Jan 22 13:39:34 compute-2 sudo[91679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:34 compute-2 ceph-mon[77081]: 8.17 scrub starts
Jan 22 13:39:34 compute-2 ceph-mon[77081]: 8.17 scrub ok
Jan 22 13:39:34 compute-2 python3.9[91681]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:39:34 compute-2 sudo[91679]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:34.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:35.050+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:35 compute-2 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:35 compute-2 sudo[91832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owyiiqpmhazxbqelgqmoonqblkyzywmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089174.9116957-932-126501679277096/AnsiballZ_dnf.py'
Jan 22 13:39:35 compute-2 sudo[91832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:35 compute-2 ceph-mon[77081]: pgmap v324: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:35 compute-2 ceph-mon[77081]: 8.15 scrub starts
Jan 22 13:39:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:35 compute-2 ceph-mon[77081]: 8.15 scrub ok
Jan 22 13:39:35 compute-2 ceph-mon[77081]: 10.1b scrub starts
Jan 22 13:39:35 compute-2 ceph-mon[77081]: 10.1b scrub ok
Jan 22 13:39:35 compute-2 python3.9[91834]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:39:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:39:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:35.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:39:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:36.004+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:36 compute-2 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.382615) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176382689, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7355, "num_deletes": 256, "total_data_size": 14124223, "memory_usage": 14346272, "flush_reason": "Manual Compaction"}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176440092, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 8798958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 7360, "table_properties": {"data_size": 8768094, "index_size": 20189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92051, "raw_average_key_size": 24, "raw_value_size": 8693720, "raw_average_value_size": 2268, "num_data_blocks": 884, "num_entries": 3833, "num_filter_entries": 3833, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088929, "oldest_key_time": 1769088929, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 57555 microseconds, and 15968 cpu microseconds.
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.440171) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 8798958 bytes OK
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.440195) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.444289) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.444326) EVENT_LOG_v1 {"time_micros": 1769089176444307, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.444348) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14083390, prev total WAL file size 14083390, number of live WAL files 2.
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.446645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8592KB) 8(1648B)]
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176446763, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 8800606, "oldest_snapshot_seqno": -1}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3580 keys, 8795175 bytes, temperature: kUnknown
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176503978, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 8795175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8765003, "index_size": 20142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 87835, "raw_average_key_size": 24, "raw_value_size": 8693778, "raw_average_value_size": 2428, "num_data_blocks": 884, "num_entries": 3580, "num_filter_entries": 3580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.504246) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 8795175 bytes
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.505669) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.6 rd, 153.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.4, 0.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3838, records dropped: 258 output_compression: NoCompression
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.505698) EVENT_LOG_v1 {"time_micros": 1769089176505687, "job": 4, "event": "compaction_finished", "compaction_time_micros": 57300, "compaction_time_cpu_micros": 16492, "output_level": 6, "num_output_files": 1, "total_output_size": 8795175, "num_input_records": 3838, "num_output_records": 3580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176507089, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176507126, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 22 13:39:36 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.446505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:39:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:36 compute-2 sudo[91832]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:36.995+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:36 compute-2 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:37 compute-2 ceph-mon[77081]: pgmap v325: 305 pgs: 1 peering, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:37 compute-2 ceph-mon[77081]: 10.2 scrub starts
Jan 22 13:39:37 compute-2 ceph-mon[77081]: 10.2 scrub ok
Jan 22 13:39:37 compute-2 python3.9[91987]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:39:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:39:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:39:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:38.028+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:38 compute-2 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 22 13:39:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 22 13:39:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Jan 22 13:39:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 13:39:38 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:38 compute-2 ceph-mon[77081]: 11.17 scrub starts
Jan 22 13:39:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:38 compute-2 ceph-mon[77081]: 11.17 scrub ok
Jan 22 13:39:38 compute-2 python3.9[92140]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 13:39:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:39.002+0000 7f47f8ed4640 -1 osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:39 compute-2 ceph-osd[79779]: osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:39 compute-2 python3.9[92290]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:39:39 compute-2 ceph-mon[77081]: pgmap v326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 13:39:39 compute-2 ceph-mon[77081]: osdmap e129: 3 total, 3 up, 3 in
Jan 22 13:39:39 compute-2 ceph-mon[77081]: 8.4 scrub starts
Jan 22 13:39:39 compute-2 ceph-mon[77081]: 8.4 scrub ok
Jan 22 13:39:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:39.990+0000 7f47f8ed4640 -1 osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:39 compute-2 ceph-osd[79779]: osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:40 compute-2 ceph-mon[77081]: pgmap v328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 13:39:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Jan 22 13:39:40 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 130 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=130) [2] r=0 lpr=130 pi=[98,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:40 compute-2 sudo[92441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enqipucdzcxkyuybzlztfivkcujriwdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089180.0721397-1056-260646277497125/AnsiballZ_systemd.py'
Jan 22 13:39:40 compute-2 sudo[92441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:40.980+0000 7f47f8ed4640 -1 osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:40 compute-2 ceph-osd[79779]: osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:41 compute-2 python3.9[92443]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:39:41 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 13:39:41 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 13:39:41 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 13:39:41 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 13:39:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 13:39:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:41 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 13:39:42 compute-2 ceph-osd[79779]: osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:42.020+0000 7f47f8ed4640 -1 osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:42 compute-2 sudo[92441]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:42 compute-2 ceph-mon[77081]: 10.19 scrub starts
Jan 22 13:39:42 compute-2 ceph-mon[77081]: 10.19 scrub ok
Jan 22 13:39:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 13:39:42 compute-2 ceph-mon[77081]: osdmap e130: 3 total, 3 up, 3 in
Jan 22 13:39:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Jan 22 13:39:42 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 131 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=-1 lpr=131 pi=[98,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:42 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 131 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=-1 lpr=131 pi=[98,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 13:39:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:42.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:42 compute-2 python3.9[92605]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 13:39:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:43.055+0000 7f47f8ed4640 -1 osd.2 131 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:43 compute-2 ceph-osd[79779]: osd.2 131 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 22 13:39:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 22 13:39:43 compute-2 ceph-mon[77081]: 10.8 scrub starts
Jan 22 13:39:43 compute-2 ceph-mon[77081]: 10.8 scrub ok
Jan 22 13:39:43 compute-2 ceph-mon[77081]: pgmap v330: 305 pgs: 1 unknown, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:43 compute-2 ceph-mon[77081]: osdmap e131: 3 total, 3 up, 3 in
Jan 22 13:39:43 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Jan 22 13:39:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:44.062+0000 7f47f8ed4640 -1 osd.2 132 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:44 compute-2 ceph-osd[79779]: osd.2 132 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:44 compute-2 ceph-mon[77081]: 8.5 scrub starts
Jan 22 13:39:44 compute-2 ceph-mon[77081]: 8.5 scrub ok
Jan 22 13:39:44 compute-2 ceph-mon[77081]: osdmap e132: 3 total, 3 up, 3 in
Jan 22 13:39:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 13:39:44 compute-2 ceph-mon[77081]: 9.e scrub starts
Jan 22 13:39:44 compute-2 ceph-mon[77081]: 9.e scrub ok
Jan 22 13:39:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Jan 22 13:39:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133) [2] r=0 lpr=133 pi=[98,133)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 13:39:44 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133) [2] r=0 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 13:39:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:44.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:45.021+0000 7f47f8ed4640 -1 osd.2 133 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:45 compute-2 ceph-osd[79779]: osd.2 133 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:45 compute-2 ceph-mon[77081]: pgmap v333: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 22 13:39:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 13:39:45 compute-2 ceph-mon[77081]: osdmap e133: 3 total, 3 up, 3 in
Jan 22 13:39:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Jan 22 13:39:45 compute-2 ceph-osd[79779]: osd.2 pg_epoch: 134 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=133/134 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133) [2] r=0 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 13:39:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:45.980+0000 7f47f8ed4640 -1 osd.2 134 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:45 compute-2 ceph-osd[79779]: osd.2 134 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 22 13:39:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 22 13:39:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:39:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:46 compute-2 ceph-mon[77081]: osdmap e134: 3 total, 3 up, 3 in
Jan 22 13:39:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 13:39:46 compute-2 ceph-mon[77081]: 9.6 scrub starts
Jan 22 13:39:46 compute-2 ceph-mon[77081]: 9.6 scrub ok
Jan 22 13:39:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:46.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Jan 22 13:39:46 compute-2 sudo[92757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqcafrybqvrblcfqayrwjzlmwdodaqee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089186.3447487-1227-44813201446346/AnsiballZ_systemd.py'
Jan 22 13:39:46 compute-2 sudo[92757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:46 compute-2 python3.9[92759]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:39:47 compute-2 ceph-osd[79779]: osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 22 13:39:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:47.001+0000 7f47f8ed4640 -1 osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 22 13:39:47 compute-2 sudo[92761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:47 compute-2 sudo[92761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:47 compute-2 sudo[92761]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:47 compute-2 sudo[92786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:47 compute-2 sudo[92786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:47 compute-2 sudo[92786]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:48 compute-2 sudo[92757]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:48.026+0000 7f47f8ed4640 -1 osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:48 compute-2 ceph-osd[79779]: osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:48 compute-2 sudo[92962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lczkxgjgkemnhajjqiwoyuwmdwbzrjuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089188.176471-1227-253877065163190/AnsiballZ_systemd.py'
Jan 22 13:39:48 compute-2 sudo[92962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:49.032+0000 7f47f8ed4640 -1 osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:49 compute-2 ceph-osd[79779]: osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 22 13:39:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 22 13:39:49 compute-2 sudo[92965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:49 compute-2 sudo[92965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-2 sudo[92965]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-2 sudo[92990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:39:49 compute-2 sudo[92990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-2 sudo[92990]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-2 sudo[93015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:49 compute-2 sudo[93015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-2 sudo[93015]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-2 sudo[93040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:39:49 compute-2 sudo[93040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:49 compute-2 ceph-mon[77081]: pgmap v336: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s; 164 B/s, 3 objects/s recovering
Jan 22 13:39:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:49 compute-2 ceph-mon[77081]: 8.f scrub starts
Jan 22 13:39:49 compute-2 ceph-mon[77081]: 8.f scrub ok
Jan 22 13:39:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 13:39:49 compute-2 ceph-mon[77081]: osdmap e135: 3 total, 3 up, 3 in
Jan 22 13:39:49 compute-2 ceph-mon[77081]: 8.c scrub starts
Jan 22 13:39:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:49 compute-2 ceph-mon[77081]: 8.c scrub ok
Jan 22 13:39:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Jan 22 13:39:49 compute-2 python3.9[92964]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:39:49 compute-2 sudo[92962]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:49.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:50.050+0000 7f47f8ed4640 -1 osd.2 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:50 compute-2 ceph-osd[79779]: osd.2 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 22 13:39:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 22 13:39:50 compute-2 sshd-session[83819]: Connection closed by 192.168.122.30 port 52944
Jan 22 13:39:50 compute-2 sshd-session[83816]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:39:50 compute-2 systemd[1]: session-34.scope: Deactivated successfully.
Jan 22 13:39:50 compute-2 systemd[1]: session-34.scope: Consumed 1min 11.713s CPU time.
Jan 22 13:39:50 compute-2 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Jan 22 13:39:50 compute-2 systemd-logind[787]: Removed session 34.
Jan 22 13:39:50 compute-2 ceph-mon[77081]: pgmap v338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 16 op/s; 155 B/s, 3 objects/s recovering
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.1 scrub starts
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.1 scrub ok
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 11.13 scrub starts
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 11.13 scrub ok
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.19 scrub starts
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.19 scrub ok
Jan 22 13:39:50 compute-2 ceph-mon[77081]: osdmap e136: 3 total, 3 up, 3 in
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.12 deep-scrub starts
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.12 deep-scrub ok
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.b scrub starts
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:50 compute-2 ceph-mon[77081]: 9.b scrub ok
Jan 22 13:39:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:50.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Jan 22 13:39:50 compute-2 podman[93162]: 2026-01-22 13:39:50.546956949 +0000 UTC m=+0.755024915 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:39:50 compute-2 podman[93162]: 2026-01-22 13:39:50.685629727 +0000 UTC m=+0.893697633 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:39:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:51.047+0000 7f47f8ed4640 -1 osd.2 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:51 compute-2 ceph-osd[79779]: osd.2 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:39:51 compute-2 podman[93319]: 2026-01-22 13:39:51.352014722 +0000 UTC m=+0.058827194 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:39:51 compute-2 podman[93319]: 2026-01-22 13:39:51.363684213 +0000 UTC m=+0.070496665 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:39:51 compute-2 ceph-mon[77081]: pgmap v340: 305 pgs: 1 active+remapped, 2 active+clean+laggy, 302 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-2 ceph-mon[77081]: osdmap e137: 3 total, 3 up, 3 in
Jan 22 13:39:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-2 ceph-mon[77081]: 9.a scrub starts
Jan 22 13:39:51 compute-2 ceph-mon[77081]: 9.a scrub ok
Jan 22 13:39:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Jan 22 13:39:51 compute-2 podman[93386]: 2026-01-22 13:39:51.582608886 +0000 UTC m=+0.065138132 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.component=keepalived-container, release=1793, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, name=keepalived, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 22 13:39:51 compute-2 podman[93386]: 2026-01-22 13:39:51.596690023 +0000 UTC m=+0.079219249 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Jan 22 13:39:51 compute-2 sudo[93040]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:51 compute-2 sudo[93419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:51 compute-2 sudo[93419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:51 compute-2 sudo[93419]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:51 compute-2 sudo[93444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:39:51 compute-2 sudo[93444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:51 compute-2 sudo[93444]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 13:39:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:51.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 13:39:51 compute-2 sudo[93469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:51 compute-2 sudo[93469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:51 compute-2 sudo[93469]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:51 compute-2 sudo[93494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:39:51 compute-2 sudo[93494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:52.004+0000 7f47f8ed4640 -1 osd.2 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:52 compute-2 ceph-osd[79779]: osd.2 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:52 compute-2 sudo[93494]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:52 compute-2 ceph-mon[77081]: pgmap v342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:52 compute-2 ceph-mon[77081]: osdmap e138: 3 total, 3 up, 3 in
Jan 22 13:39:52 compute-2 ceph-mon[77081]: 9.d scrub starts
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:52 compute-2 ceph-mon[77081]: 9.d scrub ok
Jan 22 13:39:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:39:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:39:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Jan 22 13:39:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:52.955+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:53 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:53 compute-2 ceph-mon[77081]: osdmap e139: 3 total, 3 up, 3 in
Jan 22 13:39:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:53.935+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:54 compute-2 ceph-mon[77081]: pgmap v345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:54.958+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 22 13:39:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 22 13:39:55 compute-2 ceph-mon[77081]: 9.3 scrub starts
Jan 22 13:39:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:55 compute-2 ceph-mon[77081]: 9.3 scrub ok
Jan 22 13:39:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:55.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:55.942+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:55 compute-2 sshd-session[93553]: Accepted publickey for zuul from 192.168.122.30 port 56750 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:39:55 compute-2 systemd-logind[787]: New session 35 of user zuul.
Jan 22 13:39:56 compute-2 systemd[1]: Started Session 35 of User zuul.
Jan 22 13:39:56 compute-2 sshd-session[93553]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:39:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:39:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:56 compute-2 ceph-mon[77081]: 9.1a scrub starts
Jan 22 13:39:56 compute-2 ceph-mon[77081]: 9.1a scrub ok
Jan 22 13:39:56 compute-2 ceph-mon[77081]: pgmap v346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:56.951+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:57 compute-2 python3.9[93707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:39:57 compute-2 ceph-mon[77081]: 9.1b scrub starts
Jan 22 13:39:57 compute-2 ceph-mon[77081]: 9.1b scrub ok
Jan 22 13:39:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:57.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:57.907+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:58 compute-2 sudo[93861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjuxfvzpodgulhxdtmugaaejneuvtnkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089197.8592987-70-168240436408299/AnsiballZ_getent.py'
Jan 22 13:39:58 compute-2 sudo[93861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:58 compute-2 python3.9[93863]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 13:39:58 compute-2 sudo[93861]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:39:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:58.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:39:58 compute-2 ceph-mon[77081]: pgmap v347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:39:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:39:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:58 compute-2 ceph-mon[77081]: 9.1e scrub starts
Jan 22 13:39:58 compute-2 ceph-mon[77081]: 9.1e scrub ok
Jan 22 13:39:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:58.885+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:59 compute-2 sudo[94015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnvcvrkxawxnxhafcjgpumlbpmletzty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089199.0865328-106-250211137328423/AnsiballZ_setup.py'
Jan 22 13:39:59 compute-2 sudo[94015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:39:59 compute-2 sudo[94018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:39:59 compute-2 sudo[94018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:59 compute-2 sudo[94018]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:59 compute-2 python3.9[94017]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:39:59 compute-2 ceph-mon[77081]: 9.f scrub starts
Jan 22 13:39:59 compute-2 ceph-mon[77081]: 9.f scrub ok
Jan 22 13:39:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:39:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:39:59 compute-2 sudo[94043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:39:59 compute-2 sudo[94043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:39:59 compute-2 sudo[94043]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:39:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:39:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:59.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:39:59 compute-2 sudo[94015]: pam_unix(sudo:session): session closed for user root
Jan 22 13:39:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:59.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:39:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:00 compute-2 sudo[94150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oewyniyzvqdnrhoytglrxjvkxwuuqaks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089199.0865328-106-250211137328423/AnsiballZ_dnf.py'
Jan 22 13:40:00 compute-2 sudo[94150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:40:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:40:00 compute-2 python3.9[94152]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:40:00 compute-2 ceph-mon[77081]: pgmap v348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 13:40:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 13:40:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:00.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 22 13:40:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 22 13:40:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:01 compute-2 ceph-mon[77081]: 9.1f scrub starts
Jan 22 13:40:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:01 compute-2 ceph-mon[77081]: 9.7 scrub starts
Jan 22 13:40:01 compute-2 ceph-mon[77081]: 9.1f scrub ok
Jan 22 13:40:01 compute-2 ceph-mon[77081]: 9.7 scrub ok
Jan 22 13:40:01 compute-2 sudo[94150]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:01.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:01.889+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:02.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:02 compute-2 sudo[94304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gigyyekogbpgaeakgwlczacswmevhmdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089202.291414-148-115015369520229/AnsiballZ_dnf.py'
Jan 22 13:40:02 compute-2 sudo[94304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:02 compute-2 ceph-mon[77081]: pgmap v349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:02 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:02 compute-2 python3.9[94306]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:02.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:03.934+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:04 compute-2 sudo[94304]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:04.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:04 compute-2 ceph-mon[77081]: pgmap v350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:04.897+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:04 compute-2 sudo[94458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dskdqheuugjkshkgszlgkzmhpmuxaboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089204.3367505-172-20542497531126/AnsiballZ_systemd.py'
Jan 22 13:40:04 compute-2 sudo[94458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:05 compute-2 python3.9[94460]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:40:05 compute-2 sudo[94458]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 22 13:40:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:05.882+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 22 13:40:06 compute-2 python3.9[94613]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:06.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:06 compute-2 sudo[94764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfakecjjlfvrzunafuyngwuovtbvhfie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089206.475343-226-86771742979233/AnsiballZ_sefcontext.py'
Jan 22 13:40:06 compute-2 sudo[94764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:06 compute-2 ceph-mon[77081]: pgmap v351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:06 compute-2 ceph-mon[77081]: 9.13 scrub starts
Jan 22 13:40:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:06 compute-2 ceph-mon[77081]: 9.13 scrub ok
Jan 22 13:40:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:06.904+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 22 13:40:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 22 13:40:07 compute-2 python3.9[94766]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 13:40:07 compute-2 sudo[94764]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:07 compute-2 sudo[94791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:40:07 compute-2 sudo[94791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:40:07 compute-2 sudo[94791]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:07 compute-2 sudo[94816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:40:07 compute-2 sudo[94816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:40:07 compute-2 sudo[94816]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:07 compute-2 ceph-mon[77081]: 9.17 scrub starts
Jan 22 13:40:07 compute-2 ceph-mon[77081]: 9.17 scrub ok
Jan 22 13:40:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:07.911+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:08 compute-2 python3.9[94966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:08.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 22 13:40:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:08.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 22 13:40:08 compute-2 ceph-mon[77081]: pgmap v352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:08 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:09 compute-2 sudo[95123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjqdndgllrulmffsprnjhcolcqefkiqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089209.0282636-280-195100465869932/AnsiballZ_dnf.py'
Jan 22 13:40:09 compute-2 sudo[95123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:09 compute-2 python3.9[95125]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:09.874+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 22 13:40:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 22 13:40:09 compute-2 ceph-mon[77081]: 9.5 scrub starts
Jan 22 13:40:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:09 compute-2 ceph-mon[77081]: 9.5 scrub ok
Jan 22 13:40:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:10.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:10 compute-2 sudo[95123]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:10.899+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:10 compute-2 ceph-mon[77081]: pgmap v353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:10 compute-2 ceph-mon[77081]: 9.18 scrub starts
Jan 22 13:40:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:10 compute-2 ceph-mon[77081]: 9.18 scrub ok
Jan 22 13:40:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:11 compute-2 sudo[95277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfneyntblabvsjrlihzrgaetrcqstdce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089211.2702909-305-186177960330632/AnsiballZ_command.py'
Jan 22 13:40:11 compute-2 sudo[95277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:11.941+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 22 13:40:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:11 compute-2 python3.9[95279]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 22 13:40:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:12 compute-2 sudo[95277]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:12.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:13 compute-2 ceph-mon[77081]: pgmap v354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:13 compute-2 ceph-mon[77081]: 9.8 scrub starts
Jan 22 13:40:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:13 compute-2 ceph-mon[77081]: 9.8 scrub ok
Jan 22 13:40:13 compute-2 sudo[95565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwabpvxmewrqhgmzelsekbeaghzhawvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089212.9746058-329-164940337899677/AnsiballZ_file.py'
Jan 22 13:40:13 compute-2 sudo[95565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:13 compute-2 python3.9[95567]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 13:40:13 compute-2 sudo[95565]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:40:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:40:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:13.901+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:14 compute-2 ceph-mon[77081]: 9.15 scrub starts
Jan 22 13:40:14 compute-2 ceph-mon[77081]: 9.15 scrub ok
Jan 22 13:40:14 compute-2 python3.9[95718]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:40:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:14.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:40:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:14.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:15 compute-2 sudo[95870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtocjfnvjposqtnmtxpzyjldpcndger ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089214.8257027-376-71158429308317/AnsiballZ_dnf.py'
Jan 22 13:40:15 compute-2 sudo[95870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:15 compute-2 ceph-mon[77081]: pgmap v355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:15 compute-2 python3.9[95872]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:15.870+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:15.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:16 compute-2 sudo[95870]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:16.899+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 22 13:40:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 22 13:40:17 compute-2 sudo[96024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksnyyofnudmiolqydinmbkduppkqxybm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089217.1094682-403-21655591777465/AnsiballZ_dnf.py'
Jan 22 13:40:17 compute-2 sudo[96024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:17 compute-2 ceph-mon[77081]: pgmap v356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:17 compute-2 ceph-mon[77081]: 9.9 scrub starts
Jan 22 13:40:17 compute-2 ceph-mon[77081]: 9.9 scrub ok
Jan 22 13:40:17 compute-2 python3.9[96026]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:17.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:17.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:18 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:18.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:18 compute-2 sudo[96024]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:18.922+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:19 compute-2 sshd-session[96029]: Invalid user ubuntu from 92.118.39.95 port 59252
Jan 22 13:40:19 compute-2 sshd-session[96029]: Connection closed by invalid user ubuntu 92.118.39.95 port 59252 [preauth]
Jan 22 13:40:19 compute-2 ceph-mon[77081]: pgmap v357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:19 compute-2 sudo[96180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqqzzxoietqlghvufohjbizeuuswwolw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089219.3853893-440-259798572092948/AnsiballZ_stat.py'
Jan 22 13:40:19 compute-2 sudo[96180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:19 compute-2 python3.9[96182]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:19 compute-2 sudo[96180]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:19.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:19.929+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:20 compute-2 sudo[96335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydsksflmvshvotmxeyzavfwjaadfamke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089220.094039-463-218730051801909/AnsiballZ_slurp.py'
Jan 22 13:40:20 compute-2 sudo[96335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:20.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:20 compute-2 python3.9[96337]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 22 13:40:20 compute-2 sudo[96335]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:20.897+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:21 compute-2 ceph-mon[77081]: pgmap v358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:21.922+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:22 compute-2 sshd-session[93556]: Connection closed by 192.168.122.30 port 56750
Jan 22 13:40:22 compute-2 sshd-session[93553]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:40:22 compute-2 systemd[1]: session-35.scope: Deactivated successfully.
Jan 22 13:40:22 compute-2 systemd[1]: session-35.scope: Consumed 17.436s CPU time.
Jan 22 13:40:22 compute-2 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Jan 22 13:40:22 compute-2 systemd-logind[787]: Removed session 35.
Jan 22 13:40:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:22.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:22.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 22 13:40:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 22 13:40:23 compute-2 ceph-mon[77081]: pgmap v359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:23 compute-2 ceph-mon[77081]: 9.16 scrub starts
Jan 22 13:40:23 compute-2 ceph-mon[77081]: 9.16 scrub ok
Jan 22 13:40:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:23.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 22 13:40:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 22 13:40:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:40:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:24.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:40:24 compute-2 ceph-mon[77081]: pgmap v360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:24 compute-2 ceph-mon[77081]: 9.1d scrub starts
Jan 22 13:40:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:24 compute-2 ceph-mon[77081]: 9.1d scrub ok
Jan 22 13:40:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:24.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:25.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:25.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:26.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:26 compute-2 ceph-mon[77081]: pgmap v361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:26.884+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:27 compute-2 sshd-session[96366]: Accepted publickey for zuul from 192.168.122.30 port 37554 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:40:27 compute-2 systemd-logind[787]: New session 36 of user zuul.
Jan 22 13:40:27 compute-2 systemd[1]: Started Session 36 of User zuul.
Jan 22 13:40:27 compute-2 sshd-session[96366]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:40:27 compute-2 sudo[96422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:40:27 compute-2 sudo[96422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:40:27 compute-2 sudo[96422]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:27 compute-2 sudo[96447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:40:27 compute-2 sudo[96447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:40:27 compute-2 sudo[96447]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:27.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:28 compute-2 python3.9[96569]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:28.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:28 compute-2 ceph-mon[77081]: pgmap v362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:28 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:28.907+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:29 compute-2 sshd-session[96651]: Invalid user sol from 45.148.10.240 port 40950
Jan 22 13:40:29 compute-2 python3.9[96726]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:40:29 compute-2 sshd-session[96651]: Connection closed by invalid user sol 45.148.10.240 port 40950 [preauth]
Jan 22 13:40:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:29.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:30.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:30 compute-2 python3.9[96920]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:30.943+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:31 compute-2 ceph-mon[77081]: pgmap v363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:31 compute-2 sshd-session[96369]: Connection closed by 192.168.122.30 port 37554
Jan 22 13:40:31 compute-2 sshd-session[96366]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:40:31 compute-2 systemd[1]: session-36.scope: Deactivated successfully.
Jan 22 13:40:31 compute-2 systemd[1]: session-36.scope: Consumed 2.145s CPU time.
Jan 22 13:40:31 compute-2 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Jan 22 13:40:31 compute-2 systemd-logind[787]: Removed session 36.
Jan 22 13:40:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:31.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:31.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:32.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:32.941+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:33 compute-2 ceph-mon[77081]: pgmap v364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:33.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:33.960+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:34.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:34.979+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:35 compute-2 ceph-mon[77081]: pgmap v365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:35.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:35.967+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:36.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:36 compute-2 ceph-mon[77081]: pgmap v366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:36.994+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:37 compute-2 sshd-session[96950]: Accepted publickey for zuul from 192.168.122.30 port 50508 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:40:37 compute-2 systemd-logind[787]: New session 37 of user zuul.
Jan 22 13:40:37 compute-2 systemd[1]: Started Session 37 of User zuul.
Jan 22 13:40:37 compute-2 sshd-session[96950]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:40:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:37.951+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:37 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:38 compute-2 python3.9[97103]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:38.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:39 compute-2 ceph-mon[77081]: pgmap v367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:39 compute-2 python3.9[97258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:39.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:39.961+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:40 compute-2 sudo[97413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uayguirnabkmyhwnxuskgthbbvfhwlay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089240.1101239-82-263130373079869/AnsiballZ_setup.py'
Jan 22 13:40:40 compute-2 sudo[97413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:40 compute-2 python3.9[97415]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:40:40 compute-2 sudo[97413]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:40.999+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:41 compute-2 sudo[97497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkszolkvjpyhuocnzirswmykagmlrppd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089240.1101239-82-263130373079869/AnsiballZ_dnf.py'
Jan 22 13:40:41 compute-2 sudo[97497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:41 compute-2 ceph-mon[77081]: pgmap v368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:41 compute-2 python3.9[97499]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:41.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:42.038+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:42 compute-2 sudo[97497]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:43.017+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:43 compute-2 sudo[97651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tixwjoirmpwpxqpaprljgyftueaftlbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089243.0966055-119-243635950923324/AnsiballZ_setup.py'
Jan 22 13:40:43 compute-2 sudo[97651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:43 compute-2 ceph-mon[77081]: pgmap v369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:43 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:43 compute-2 python3.9[97653]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:40:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:43.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:43 compute-2 sudo[97651]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:44.029+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:44.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:44 compute-2 sudo[97847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwlojnzquwiyvqxayrnryjulfpkhygbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089244.4695883-151-231902394557595/AnsiballZ_file.py'
Jan 22 13:40:44 compute-2 sudo[97847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:45 compute-2 python3.9[97849]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:40:45 compute-2 sudo[97847]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:45.072+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:45 compute-2 sudo[97999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiqacvafaxqrmkugtxpkzxdljxfeawvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089245.361016-176-78426071457562/AnsiballZ_command.py'
Jan 22 13:40:45 compute-2 sudo[97999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:45 compute-2 ceph-mon[77081]: pgmap v370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:45 compute-2 python3.9[98001]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:45.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:45 compute-2 sudo[97999]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:46.095+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:46 compute-2 sudo[98165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wytzhmhcixxrymgicvkkxhndutbguada ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089246.4058237-199-58215904524688/AnsiballZ_stat.py'
Jan 22 13:40:46 compute-2 sudo[98165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:46 compute-2 ceph-mon[77081]: pgmap v371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:47 compute-2 python3.9[98167]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:40:47 compute-2 sudo[98165]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:47.141+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:47 compute-2 sudo[98243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clgkxwaxhhcanoplajklyodsufqbyucs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089246.4058237-199-58215904524688/AnsiballZ_file.py'
Jan 22 13:40:47 compute-2 sudo[98243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:47 compute-2 python3.9[98245]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:40:47 compute-2 sudo[98243]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:47 compute-2 sudo[98270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:40:47 compute-2 sudo[98270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:40:47 compute-2 sudo[98270]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:47 compute-2 sudo[98295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:40:47 compute-2 sudo[98295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:40:47 compute-2 sudo[98295]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:47.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:48.151+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:48 compute-2 sudo[98445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wipaexvvuygujtbcevhdoknweksgvcnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089247.9259472-236-147431042197980/AnsiballZ_stat.py'
Jan 22 13:40:48 compute-2 sudo[98445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:48 compute-2 python3.9[98448]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:40:48 compute-2 sudo[98445]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:48.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:48 compute-2 sudo[98524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdrfbqsranzbagghseglbdwysslsarvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089247.9259472-236-147431042197980/AnsiballZ_file.py'
Jan 22 13:40:48 compute-2 sudo[98524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:48 compute-2 ceph-mon[77081]: pgmap v372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:48 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:49 compute-2 python3.9[98526]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:49 compute-2 sudo[98524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:49.146+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:49 compute-2 sudo[98676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznxldvzhsuxteukcwpygskgbiaafwml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089249.3291593-274-231823752518514/AnsiballZ_ini_file.py'
Jan 22 13:40:49 compute-2 sudo[98676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:49 compute-2 python3.9[98678]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:49 compute-2 sudo[98676]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:50.145+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:50 compute-2 sudo[98828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmtquufbtfwsdvmydcgsmbvcepchlscz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089250.0490003-274-159085933996414/AnsiballZ_ini_file.py'
Jan 22 13:40:50 compute-2 sudo[98828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:50 compute-2 python3.9[98830]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:50 compute-2 sudo[98828]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:50 compute-2 sudo[98981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmjjvuixgcwzwmgmdcwnnqrykdiuytq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089250.6035728-274-212126182000605/AnsiballZ_ini_file.py'
Jan 22 13:40:50 compute-2 sudo[98981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:51 compute-2 ceph-mon[77081]: pgmap v373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:51 compute-2 python3.9[98983]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:51 compute-2 sudo[98981]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:51.168+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:51 compute-2 sudo[99133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-misbxiphuaiflqyfvphhulejnidhtvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089251.1854932-274-130540914960408/AnsiballZ_ini_file.py'
Jan 22 13:40:51 compute-2 sudo[99133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:51 compute-2 python3.9[99135]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:40:51 compute-2 sudo[99133]: pam_unix(sudo:session): session closed for user root
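Taken together, the four ini_file invocations above (pids_limit, events_logger, runtime, network_backend) converge /etc/containers/containers.conf on one fixed state. A minimal Python sketch of that convergence using only the standard library, assuming the file is absent or simple key = value content that configparser can parse; it mirrors the effect of the module, not the module's own implementation:

    import configparser

    # containers.conf is TOML; for bare key = value pairs the INI form that
    # ini_file writes is also valid TOML, hence the quoted values ("crun",
    # "journald", "netavark") passed in the task arguments above.
    conf = configparser.ConfigParser()
    conf.read("/etc/containers/containers.conf")  # silently skips a missing file

    desired = {
        "containers": {"pids_limit": "4096"},
        "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
        "network": {"network_backend": '"netavark"'},
    }
    for section, options in desired.items():
        if not conf.has_section(section):
            conf.add_section(section)
        for key, value in options.items():
            conf.set(section, key, value)

    with open("/etc/containers/containers.conf", "w") as handle:
        conf.write(handle)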
Jan 22 13:40:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:52.164+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:52.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:52 compute-2 sudo[99286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgywcnezqsltbvcmgxiqlkwqengmvzuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089252.5709162-367-165179095883730/AnsiballZ_dnf.py'
Jan 22 13:40:52 compute-2 sudo[99286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:53 compute-2 ceph-mon[77081]: pgmap v374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:53 compute-2 python3.9[99288]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:40:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:53.167+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:53.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:54.195+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:54 compute-2 sudo[99286]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:54.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:55 compute-2 ceph-mon[77081]: pgmap v375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:55.234+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:55 compute-2 sudo[99440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqsbbobhaclthjcenzqulaavadkwhluq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089255.051077-400-83416960171905/AnsiballZ_setup.py'
Jan 22 13:40:55 compute-2 sudo[99440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:55 compute-2 python3.9[99442]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:40:55 compute-2 sudo[99440]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:56.267+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:56 compute-2 sudo[99594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdlifgepzdqzeklatjxgxrptikaqugyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089256.012341-424-260128281959978/AnsiballZ_stat.py'
Jan 22 13:40:56 compute-2 sudo[99594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:56 compute-2 python3.9[99596]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:56 compute-2 sudo[99594]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:40:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:56.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:57 compute-2 sudo[99747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egxqmqzboyeipycegeioemdnfwrsbpjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089256.841019-451-207831450185891/AnsiballZ_stat.py'
Jan 22 13:40:57 compute-2 sudo[99747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:57.228+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:57 compute-2 ceph-mon[77081]: pgmap v376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:57 compute-2 python3.9[99749]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:40:57 compute-2 sudo[99747]: pam_unix(sudo:session): session closed for user root
Jan 22 13:40:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:57.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:40:58 compute-2 sudo[99899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahjsrgdzowarzzpqehrzjydgxosfarar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089257.7425315-481-197100196268700/AnsiballZ_command.py'
Jan 22 13:40:58 compute-2 sudo[99899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:58.184+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:58 compute-2 python3.9[99901]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:40:58 compute-2 sudo[99899]: pam_unix(sudo:session): session closed for user root
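The is-system-running check above is best read with its exit status in mind: systemctl prints a single state word (running, degraded, starting, and so on) and, per systemd's documentation, exits zero only when the system is fully running, so a playbook would typically pair this command with failed_when or retries rather than letting a non-zero code fail the task. A small Python sketch of the same probe; treating every non-running state as "not fully up" is an assumption drawn from that documented behavior:

    import subprocess

    # systemctl is-system-running prints one state word and returns
    # exit code 0 only when the system is fully "running".
    probe = subprocess.run(
        ["systemctl", "is-system-running"],
        capture_output=True, text=True,
    )
    state = probe.stdout.strip()
    if state != "running":
        # e.g. "degraded" means at least one unit failed.
        print("system not fully up:", state, probe.returncode)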
Jan 22 13:40:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:40:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:40:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:40:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:59.199+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:40:59 compute-2 sudo[100053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vddexvyddrhrslpaprzglkmltciyjhmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089258.6242387-512-127516284131128/AnsiballZ_service_facts.py'
Jan 22 13:40:59 compute-2 sudo[100053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:40:59 compute-2 ceph-mon[77081]: pgmap v377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:40:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:40:59 compute-2 python3.9[100055]: ansible-service_facts Invoked
Jan 22 13:40:59 compute-2 network[100072]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:40:59 compute-2 network[100073]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:40:59 compute-2 network[100074]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:40:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:40:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:40:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:59.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:00 compute-2 sudo[100080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:00 compute-2 sudo[100080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-2 sudo[100080]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:00 compute-2 sudo[100106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:41:00 compute-2 sudo[100106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-2 sudo[100106]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:00.216+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:00 compute-2 sudo[100134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:00 compute-2 sudo[100134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-2 sudo[100134]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:00 compute-2 sudo[100162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:41:00 compute-2 sudo[100162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:00.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:00 compute-2 podman[100280]: 2026-01-22 13:41:00.723071625 +0000 UTC m=+0.064043926 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:41:00 compute-2 podman[100280]: 2026-01-22 13:41:00.829651574 +0000 UTC m=+0.170623875 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 13:41:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:01.198+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:01 compute-2 podman[100462]: 2026-01-22 13:41:01.358274778 +0000 UTC m=+0.045658041 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:41:01 compute-2 podman[100462]: 2026-01-22 13:41:01.393716662 +0000 UTC m=+0.081099895 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:41:01 compute-2 ceph-mon[77081]: pgmap v378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:01 compute-2 podman[100542]: 2026-01-22 13:41:01.576661148 +0000 UTC m=+0.051435616 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 13:41:01 compute-2 podman[100542]: 2026-01-22 13:41:01.588692982 +0000 UTC m=+0.063467470 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.openshift.tags=Ceph keepalived, release=1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, build-date=2023-02-22T09:23:20)
Jan 22 13:41:01 compute-2 sudo[100162]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-2 sudo[100578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:01 compute-2 sudo[100578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:01 compute-2 sudo[100578]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-2 sudo[100607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:41:01 compute-2 sudo[100607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:01 compute-2 sudo[100607]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-2 sudo[100635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:01 compute-2 sudo[100635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:01 compute-2 sudo[100635]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-2 sudo[100663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:41:01 compute-2 sudo[100663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:01 compute-2 sudo[100053]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:01.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:02.238+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:02 compute-2 sudo[100663]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:41:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:02.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:41:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:03.229+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:03 compute-2 ceph-mon[77081]: pgmap v379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:03 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:41:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:41:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:41:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:41:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:41:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:03 compute-2 sudo[100896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhjvrxmxcddwkpxqlqpjpqixvwrzzarn ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769089263.6226814-557-272747155536081/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769089263.6226814-557-272747155536081/args'
Jan 22 13:41:03 compute-2 sudo[100896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:04 compute-2 sudo[100896]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:04.190+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:04.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:04 compute-2 sudo[101064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drcaytcftmmodqplmddrgrwduljvmysy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089264.5533364-590-252438383409701/AnsiballZ_dnf.py'
Jan 22 13:41:04 compute-2 sudo[101064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:05 compute-2 python3.9[101066]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:41:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:05.205+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:05 compute-2 ceph-mon[77081]: pgmap v380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:05.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:06.219+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:06 compute-2 sudo[101064]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:06.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:06 compute-2 ceph-mon[77081]: pgmap v381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:07.188+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:07 compute-2 sudo[101218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skgzcirqrkkoabmbmlfqjweylixpridu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089267.0729914-629-229258441715391/AnsiballZ_package_facts.py'
Jan 22 13:41:07 compute-2 sudo[101218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:41:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:41:08 compute-2 sudo[101221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:08 compute-2 sudo[101221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:08 compute-2 sudo[101221]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:08 compute-2 python3.9[101220]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 13:41:08 compute-2 sudo[101246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:08 compute-2 sudo[101246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:08 compute-2 sudo[101246]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:08.157+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:08 compute-2 sudo[101218]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:08.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:08 compute-2 ceph-mon[77081]: pgmap v382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:08 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:09.206+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:09 compute-2 sudo[101421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klkcilbnzcbwgisrqhfluokmepppcuzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089269.0160806-659-119009090882519/AnsiballZ_stat.py'
Jan 22 13:41:09 compute-2 sudo[101421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:09 compute-2 python3.9[101423]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:09 compute-2 sudo[101421]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:09 compute-2 sudo[101499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biejfarafbhtcqmejkrgutqxwklgavgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089269.0160806-659-119009090882519/AnsiballZ_file.py'
Jan 22 13:41:09 compute-2 sudo[101499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:41:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:09 compute-2 python3.9[101501]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:09 compute-2 sudo[101499]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:10 compute-2 sudo[101502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:10 compute-2 sudo[101502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:10 compute-2 sudo[101502]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:10.172+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:10 compute-2 sudo[101551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:41:10 compute-2 sudo[101551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:10 compute-2 sudo[101551]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:10 compute-2 sudo[101702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pezqzxzhrvxfcxrnnezmnevbokvdgkph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089270.5206952-698-126305092338571/AnsiballZ_stat.py'
Jan 22 13:41:10 compute-2 sudo[101702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:10 compute-2 ceph-mon[77081]: pgmap v383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:11 compute-2 python3.9[101704]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:11 compute-2 sudo[101702]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:11.144+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:11 compute-2 sudo[101780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnksalhaqcbbryzhjulhnadqvigrsuwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089270.5206952-698-126305092338571/AnsiballZ_file.py'
Jan 22 13:41:11 compute-2 sudo[101780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:11 compute-2 python3.9[101782]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:11 compute-2 sudo[101780]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:12.144+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:12.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:12 compute-2 ceph-mon[77081]: pgmap v384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:13 compute-2 sudo[101933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilynpntvkycrkfpwwiyqwwlyupolceco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089272.6541948-749-233930423864456/AnsiballZ_lineinfile.py'
Jan 22 13:41:13 compute-2 sudo[101933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:13.112+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:13 compute-2 python3.9[101935]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:13 compute-2 sudo[101933]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:14.146+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:14 compute-2 sudo[102086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olflkwboynbqgpinhgjgjzlglnponlad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089274.6065001-795-158808632976556/AnsiballZ_setup.py'
Jan 22 13:41:14 compute-2 sudo[102086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:15.132+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:15 compute-2 python3.9[102088]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:41:15 compute-2 sudo[102086]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:15 compute-2 ceph-mon[77081]: pgmap v385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:15.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:16 compute-2 sudo[102170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-endyjgiupwkeioifqvbjczbfnjjzhekc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089274.6065001-795-158808632976556/AnsiballZ_systemd.py'
Jan 22 13:41:16 compute-2 sudo[102170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:16.163+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:16 compute-2 python3.9[102172]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:41:16 compute-2 sudo[102170]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:17.175+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:17 compute-2 sshd-session[96953]: Connection closed by 192.168.122.30 port 50508
Jan 22 13:41:17 compute-2 sshd-session[96950]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:41:17 compute-2 systemd[1]: session-37.scope: Deactivated successfully.
Jan 22 13:41:17 compute-2 systemd[1]: session-37.scope: Consumed 23.312s CPU time.
Jan 22 13:41:17 compute-2 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Jan 22 13:41:17 compute-2 systemd-logind[787]: Removed session 37.
Jan 22 13:41:17 compute-2 ceph-mon[77081]: pgmap v386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:17.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:18.168+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:18.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:18 compute-2 ceph-mon[77081]: pgmap v387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:18 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:19.188+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:19.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:20.203+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:20.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:20 compute-2 ceph-mon[77081]: pgmap v388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:21.244+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:21.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:22.209+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:22.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:22 compute-2 sshd-session[102203]: Accepted publickey for zuul from 192.168.122.30 port 50940 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:41:22 compute-2 systemd-logind[787]: New session 38 of user zuul.
Jan 22 13:41:22 compute-2 systemd[1]: Started Session 38 of User zuul.
Jan 22 13:41:22 compute-2 sshd-session[102203]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:41:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:23.237+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:23 compute-2 sudo[102356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozwgjfuiajyfcjbvcgcuzyzuihjreiba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089282.9964368-29-164303212258230/AnsiballZ_file.py'
Jan 22 13:41:23 compute-2 sudo[102356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:23 compute-2 python3.9[102358]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:23 compute-2 ceph-mon[77081]: pgmap v389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:23 compute-2 sudo[102356]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:23.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:24.237+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:24 compute-2 sudo[102509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wusmalguxrwcvstmakpphrqiucnmhbnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089283.8776276-65-260921202635268/AnsiballZ_stat.py'
Jan 22 13:41:24 compute-2 sudo[102509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:24 compute-2 python3.9[102511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:24 compute-2 sudo[102509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:24 compute-2 ceph-mon[77081]: pgmap v390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:24 compute-2 sudo[102587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbjsferjlcrvgohalkplgxqnxtznqopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089283.8776276-65-260921202635268/AnsiballZ_file.py'
Jan 22 13:41:24 compute-2 sudo[102587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:24 compute-2 python3.9[102589]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:24 compute-2 sudo[102587]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:25.285+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:25 compute-2 sshd-session[102206]: Connection closed by 192.168.122.30 port 50940
Jan 22 13:41:25 compute-2 sshd-session[102203]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:41:25 compute-2 systemd[1]: session-38.scope: Deactivated successfully.
Jan 22 13:41:25 compute-2 systemd[1]: session-38.scope: Consumed 1.285s CPU time.
Jan 22 13:41:25 compute-2 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Jan 22 13:41:25 compute-2 systemd-logind[787]: Removed session 38.
Jan 22 13:41:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:26.305+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:26 compute-2 ceph-mon[77081]: pgmap v391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:27.298+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:27.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:28 compute-2 sudo[102616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:28 compute-2 sudo[102616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:28 compute-2 sudo[102616]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:28 compute-2 sudo[102641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:28 compute-2 sudo[102641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:28 compute-2 sudo[102641]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:28.311+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:28.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:29.327+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:29 compute-2 ceph-mon[77081]: pgmap v392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:29.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:30.326+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:31 compute-2 sshd-session[102668]: Accepted publickey for zuul from 192.168.122.30 port 50308 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:41:31 compute-2 systemd-logind[787]: New session 39 of user zuul.
Jan 22 13:41:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:31.331+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:31 compute-2 systemd[1]: Started Session 39 of User zuul.
Jan 22 13:41:31 compute-2 sshd-session[102668]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:41:31 compute-2 ceph-mon[77081]: pgmap v393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:32.379+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:32 compute-2 python3.9[102821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:41:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:33.346+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:33 compute-2 ceph-mon[77081]: pgmap v394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:33 compute-2 sudo[102976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwnvhpigcelkcctizovfayizfmndoge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089293.2178252-62-124530258401284/AnsiballZ_file.py'
Jan 22 13:41:33 compute-2 sudo[102976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:33 compute-2 python3.9[102978]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:33 compute-2 sudo[102976]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:34.353+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:34 compute-2 sudo[103152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqvjrgexycppgxninvlvfzhfmxsunun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089294.1965935-86-60737976237952/AnsiballZ_stat.py'
Jan 22 13:41:34 compute-2 sudo[103152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:34 compute-2 python3.9[103154]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:34 compute-2 sudo[103152]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:35 compute-2 sudo[103230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-choqbvgkmxqoflybbxfbpxzulqpzdaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089294.1965935-86-60737976237952/AnsiballZ_file.py'
Jan 22 13:41:35 compute-2 sudo[103230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:35 compute-2 python3.9[103232]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.73yt8_6b recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:35.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:35 compute-2 sudo[103230]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:35 compute-2 ceph-mon[77081]: pgmap v395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:36.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:36.257+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:36 compute-2 sudo[103383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylxmkfegdtcwfrozbueigvgzqkkygefw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089296.2640133-147-182912440539622/AnsiballZ_stat.py'
Jan 22 13:41:36 compute-2 sudo[103383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:36.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:36 compute-2 python3.9[103385]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:36 compute-2 sudo[103383]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:36 compute-2 ceph-mon[77081]: pgmap v396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:37 compute-2 sudo[103461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amitzwpakgqcgwdwfgerxaogrjtgstnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089296.2640133-147-182912440539622/AnsiballZ_file.py'
Jan 22 13:41:37 compute-2 sudo[103461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:37.259+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:37 compute-2 python3.9[103463]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.6j1ftkho recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:37 compute-2 sudo[103461]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:37 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:37 compute-2 sudo[103613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osfdstlmfaokwqkwzttbjjnyntwcjfrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089297.659467-186-180876071543549/AnsiballZ_file.py'
Jan 22 13:41:37 compute-2 sudo[103613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:41:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:38.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:41:38 compute-2 python3.9[103615]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:41:38 compute-2 sudo[103613]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:38.245+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:38 compute-2 sudo[103766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mazimdtmpogairttunckzmkdtufhssgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089298.411048-209-175758125399309/AnsiballZ_stat.py'
Jan 22 13:41:38 compute-2 sudo[103766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:38 compute-2 python3.9[103768]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:38 compute-2 sudo[103766]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:38 compute-2 ceph-mon[77081]: pgmap v397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:39 compute-2 sudo[103844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlegdpwgkmbgfchiwahgjeecgnmlijta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089298.411048-209-175758125399309/AnsiballZ_file.py'
Jan 22 13:41:39 compute-2 sudo[103844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:39.204+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:39 compute-2 python3.9[103846]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:41:39 compute-2 sudo[103844]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:39 compute-2 sudo[103996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzcbwcqcpwdkjnrwqbswgjqjqwscyvro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089299.4968574-209-216815716337986/AnsiballZ_stat.py'
Jan 22 13:41:39 compute-2 sudo[103996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:40 compute-2 python3.9[103998]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:40.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:40 compute-2 sudo[103996]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:40.179+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:40 compute-2 sudo[104075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcexmeelyqsvzsyrqnphguyngshlohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089299.4968574-209-216815716337986/AnsiballZ_file.py'
Jan 22 13:41:40 compute-2 sudo[104075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:41:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:40.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:41:40 compute-2 python3.9[104077]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:41:40 compute-2 sudo[104075]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:41.211+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:41 compute-2 sudo[104227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogndkjjthunuzueidlbgqfegthranrko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089301.0108485-279-184051974898074/AnsiballZ_file.py'
Jan 22 13:41:41 compute-2 sudo[104227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:41 compute-2 ceph-mon[77081]: pgmap v398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:41 compute-2 python3.9[104229]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:41 compute-2 sudo[104227]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:42.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:42.188+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:42 compute-2 sudo[104379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrccudzpknoahxltshyvoqiivvmoloru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089301.8687325-303-193586089595531/AnsiballZ_stat.py'
Jan 22 13:41:42 compute-2 sudo[104379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:42 compute-2 python3.9[104381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:42 compute-2 sudo[104379]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:42.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:42 compute-2 sudo[104458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzicykgbljcxymyecnjhtvhmcafoqudh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089301.8687325-303-193586089595531/AnsiballZ_file.py'
Jan 22 13:41:42 compute-2 sudo[104458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:42 compute-2 python3.9[104460]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:42 compute-2 sudo[104458]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:43.236+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:43 compute-2 ceph-mon[77081]: pgmap v399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:43 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:43 compute-2 sudo[104610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqhzykjklizhefnvdfnbpwabdiuppavw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089303.3385975-339-115362974534305/AnsiballZ_stat.py'
Jan 22 13:41:43 compute-2 sudo[104610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:43 compute-2 python3.9[104612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:43 compute-2 sudo[104610]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:44.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:44 compute-2 sudo[104688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxjjvwcxnlihsvtvpqndodppnodllbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089303.3385975-339-115362974534305/AnsiballZ_file.py'
Jan 22 13:41:44 compute-2 sudo[104688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:44 compute-2 python3.9[104690]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:44.235+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:44 compute-2 sudo[104688]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:44.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:45.189+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:45 compute-2 sudo[104841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwcpwfhjdstijcjdlqbitlgewvvqjdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089304.77684-374-75447184937400/AnsiballZ_systemd.py'
Jan 22 13:41:45 compute-2 sudo[104841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:45 compute-2 ceph-mon[77081]: pgmap v400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:45 compute-2 python3.9[104843]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:41:45 compute-2 systemd[1]: Reloading.
Jan 22 13:41:45 compute-2 systemd-rc-local-generator[104869]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:41:45 compute-2 systemd-sysv-generator[104873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:41:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:41:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:46.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:41:46 compute-2 sudo[104841]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:46.210+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:46.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:46 compute-2 sudo[105031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udrykliobqbipnpsxvdcovlpairbjqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089306.4927804-399-23529229043646/AnsiballZ_stat.py'
Jan 22 13:41:46 compute-2 sudo[105031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:46 compute-2 python3.9[105033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:46 compute-2 sudo[105031]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:47 compute-2 sudo[105109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukuhlsavrllxpizpfyipqcypcolslfqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089306.4927804-399-23529229043646/AnsiballZ_file.py'
Jan 22 13:41:47 compute-2 sudo[105109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:47.231+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:47 compute-2 python3.9[105111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:47 compute-2 sudo[105109]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:47 compute-2 ceph-mon[77081]: pgmap v401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:47 compute-2 sudo[105261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxlaipeysygwgqchdecernchgvrqrqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089307.6611207-434-169251042342016/AnsiballZ_stat.py'
Jan 22 13:41:47 compute-2 sudo[105261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:48.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:48 compute-2 python3.9[105263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:48.205+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:48 compute-2 sudo[105261]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:48 compute-2 sudo[105267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:48 compute-2 sudo[105267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:48 compute-2 sudo[105267]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:48 compute-2 sudo[105292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:41:48 compute-2 sudo[105292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:41:48 compute-2 sudo[105292]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:48.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:48 compute-2 sudo[105390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lygjkmldgkwhamhsbyzoqumckglgttsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089307.6611207-434-169251042342016/AnsiballZ_file.py'
Jan 22 13:41:48 compute-2 sudo[105390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:48 compute-2 ceph-mon[77081]: pgmap v402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:48 compute-2 python3.9[105392]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:48 compute-2 sudo[105390]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:49.226+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:49 compute-2 sudo[105542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljsozsbiadcgibvqrqewmywoadpzluoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089309.1068292-470-182939470395418/AnsiballZ_systemd.py'
Jan 22 13:41:49 compute-2 sudo[105542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:49 compute-2 python3.9[105544]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:41:49 compute-2 systemd[1]: Reloading.
Jan 22 13:41:49 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:49 compute-2 systemd-rc-local-generator[105573]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:41:49 compute-2 systemd-sysv-generator[105576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:41:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:50 compute-2 systemd[1]: Starting Create netns directory...
Jan 22 13:41:50 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:41:50 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:41:50 compute-2 systemd[1]: Finished Create netns directory.
Jan 22 13:41:50 compute-2 sudo[105542]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:50.219+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:50 compute-2 ceph-mon[77081]: pgmap v403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:51 compute-2 python3.9[105737]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:41:51 compute-2 network[105754]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:41:51 compute-2 network[105755]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:41:51 compute-2 network[105756]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:41:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:51.202+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:41:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:52.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:41:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:52.199+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:41:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:41:52 compute-2 ceph-mon[77081]: pgmap v404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:53.208+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:54.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:54.183+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:54 compute-2 ceph-mon[77081]: pgmap v405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:55.205+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:56.193+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:41:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:57.154+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:57 compute-2 ceph-mon[77081]: pgmap v406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:57 compute-2 sudo[106019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdxidccsetkgvxinzchddmkzsaohjmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089317.5487885-549-40845453324848/AnsiballZ_stat.py'
Jan 22 13:41:57 compute-2 sudo[106019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:58 compute-2 python3.9[106021]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:41:58 compute-2 sudo[106019]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:58.162+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:41:58 compute-2 sudo[106097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmknnzmxrintlbjemcxpcamunmkzqpfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089317.5487885-549-40845453324848/AnsiballZ_file.py'
Jan 22 13:41:58 compute-2 sudo[106097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:58 compute-2 python3.9[106099]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:58 compute-2 sudo[106097]: pam_unix(sudo:session): session closed for user root
Jan 22 13:41:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:41:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:41:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:58.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:41:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:59.135+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:41:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:59 compute-2 sudo[106250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzeutepynbxmrzdtxifhbgfflnoiezso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089318.9572616-588-263598049664318/AnsiballZ_file.py'
Jan 22 13:41:59 compute-2 sudo[106250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:41:59 compute-2 ceph-mon[77081]: pgmap v407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:41:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:41:59 compute-2 python3.9[106252]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:41:59 compute-2 sudo[106250]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:00.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:00.138+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:00 compute-2 sudo[106403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hebukncyfjcmuzmiccmzabbegfigtbno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089319.9775457-612-77517337944488/AnsiballZ_stat.py'
Jan 22 13:42:00 compute-2 sudo[106403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:00 compute-2 python3.9[106405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:00 compute-2 sudo[106403]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:00 compute-2 ceph-mon[77081]: pgmap v408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:42:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:42:00 compute-2 sudo[106481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygikwbxoiopyqjtsavtoptnmrhnuumdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089319.9775457-612-77517337944488/AnsiballZ_file.py'
Jan 22 13:42:00 compute-2 sudo[106481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:00 compute-2 python3.9[106483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:01 compute-2 sudo[106481]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:01.161+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:02.150+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:02 compute-2 sudo[106633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkkjsjlllkvtgxejvtlmaddhkpvkmul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089321.712923-656-65315456368435/AnsiballZ_timezone.py'
Jan 22 13:42:02 compute-2 sudo[106633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:02 compute-2 python3.9[106635]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 13:42:02 compute-2 systemd[1]: Starting Time & Date Service...
Jan 22 13:42:02 compute-2 systemd[1]: Started Time & Date Service.
Jan 22 13:42:02 compute-2 sudo[106633]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:02.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:02 compute-2 ceph-mon[77081]: pgmap v409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:03.159+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:03 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
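The blocked-for counter in the health check advances in lockstep with wall time: 313 sec at 13:42:03 here, and 318 sec when the next update lands at 13:42:08 below, so these are the same two ops aging rather than new ones accumulating. Working backwards puts the oldest op at roughly 13:36:50:

    #!/usr/bin/env python3
    # Back out the start of the oldest blocked op from the health-check line
    # "2 slow ops, oldest one blocked for 313 sec" stamped 13:42:03.
    from datetime import datetime, timedelta

    reported = datetime(2026, 1, 22, 13, 42, 3)
    print(reported - timedelta(seconds=313))  # 2026-01-22 13:36:50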
Jan 22 13:42:03 compute-2 sudo[106790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfimuernbklfgdtdmtbhruvrirzvtijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089323.3992434-684-196617849757000/AnsiballZ_file.py'
Jan 22 13:42:03 compute-2 sudo[106790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:03 compute-2 python3.9[106792]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:03 compute-2 sudo[106790]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:04.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:04.174+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:04 compute-2 sudo[106943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiqntgnfhgnohcuyqujitwtsjspxnoib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089324.1770535-709-233759090573472/AnsiballZ_stat.py'
Jan 22 13:42:04 compute-2 sudo[106943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:04 compute-2 ceph-mon[77081]: pgmap v410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:04 compute-2 python3.9[106945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:04 compute-2 sudo[106943]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:05 compute-2 sudo[107021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhtrnhlpyzwdvrwprnczbicomjkdxwde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089324.1770535-709-233759090573472/AnsiballZ_file.py'
Jan 22 13:42:05 compute-2 sudo[107021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:05.221+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:05 compute-2 python3.9[107023]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:05 compute-2 sudo[107021]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:05 compute-2 sudo[107173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsqzncrpvoaitgupfgobihwumbtlvde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089325.6190493-744-205922381865105/AnsiballZ_stat.py'
Jan 22 13:42:05 compute-2 sudo[107173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:06 compute-2 python3.9[107175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:06 compute-2 sudo[107173]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:06.175+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:06 compute-2 sudo[107252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxijdtbbrqdnqpbetofuwivdvkezkbck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089325.6190493-744-205922381865105/AnsiballZ_file.py'
Jan 22 13:42:06 compute-2 sudo[107252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:06 compute-2 python3.9[107254]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yknvatdw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:06 compute-2 sudo[107252]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:06.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:06 compute-2 ceph-mon[77081]: pgmap v411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:07.133+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:07 compute-2 sudo[107404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puvfyqkzzdsuetpxaiaoolaebmbpcowm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089327.1028628-780-102837220909176/AnsiballZ_stat.py'
Jan 22 13:42:07 compute-2 sudo[107404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:07 compute-2 python3.9[107406]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:07 compute-2 sudo[107404]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:07 compute-2 sudo[107482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycshhqywypzroyahyzfjezfwptygipzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089327.1028628-780-102837220909176/AnsiballZ_file.py'
Jan 22 13:42:07 compute-2 sudo[107482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:08 compute-2 python3.9[107484]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:08 compute-2 sudo[107482]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:08.092+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:08 compute-2 sudo[107562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:08 compute-2 sudo[107562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:08 compute-2 sudo[107562]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:08 compute-2 sudo[107587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:08 compute-2 sudo[107587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:08 compute-2 sudo[107587]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:42:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:08.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:42:08 compute-2 sudo[107685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-harvbqcbvwrzjrhiguzuvlmqkpolflzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089328.423992-819-138037206729922/AnsiballZ_command.py'
Jan 22 13:42:08 compute-2 sudo[107685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:08 compute-2 ceph-mon[77081]: pgmap v412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:08 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:09 compute-2 python3.9[107687]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:09 compute-2 sudo[107685]: pam_unix(sudo:session): session closed for user root
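The `nft -j list ruleset` run above is how the EDPM firewall role snapshots the current ruleset before layering on the chain files staged around it (edpm-chains.nft, edpm-jumps.nft, edpm-flushes.nft). In JSON mode nft wraps everything in a single top-level "nftables" array of one-key objects; a sketch of walking it:

    #!/usr/bin/env python3
    # Sketch: list every chain in the live ruleset via nft's JSON mode
    # (the same `nft -j list ruleset` invocation logged above). Needs root.
    import json
    import subprocess

    ruleset = json.loads(subprocess.check_output(["nft", "-j", "list", "ruleset"]))
    for item in ruleset["nftables"]:
        if "chain" in item:
            chain = item["chain"]
            print(chain["family"], chain["table"], chain["name"])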
Jan 22 13:42:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:09.092+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:09 compute-2 sudo[107838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmzawlhnvazoluprmamcbkgzlrphouqk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089329.447813-843-106471056706747/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:42:09 compute-2 sudo[107838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:10.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:10.072+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:10 compute-2 python3[107840]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:42:10 compute-2 sudo[107838]: pam_unix(sudo:session): session closed for user root
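ansible-edpm_nftables_from_files is pointed at the same /var/lib/edpm-config/firewall directory the earlier tasks populated (sshd-networks.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml). Its actual implementation is not visible in this log; a plausible minimal reading, offered only as a hypothetical sketch, is that it merges the per-service YAML rule files into one rule list:

    #!/usr/bin/env python3
    # Hypothetical sketch -- NOT the real edpm_nftables_from_files module.
    # Assumes each staged file holds a YAML list of rule dicts and simply
    # concatenates them in filename order.
    import glob
    import yaml  # PyYAML

    def load_rules(src="/var/lib/edpm-config/firewall"):
        rules = []
        for path in sorted(glob.glob(f"{src}/*.yaml")):
            with open(path) as f:
                rules.extend(yaml.safe_load(f) or [])
        return rules

    for rule in load_rules():
        print(rule)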
Jan 22 13:42:10 compute-2 sudo[107841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:10 compute-2 sudo[107841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:10 compute-2 sudo[107841]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:10 compute-2 sudo[107890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:42:10 compute-2 sudo[107890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:10 compute-2 sudo[107890]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:10 compute-2 sudo[107916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:10 compute-2 sudo[107916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:10 compute-2 sudo[107916]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:10 compute-2 sudo[107942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:42:10 compute-2 sudo[107942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
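In parallel with the firewall tasks, the Ceph orchestrator is polling this host through the ceph-admin user: two /bin/true connectivity probes, a `which python3`, then the versioned cephadm wrapper invoked with `ls`. `cephadm ls` reports every daemon cephadm manages on the host as a JSON array; a sketch of consuming it:

    #!/usr/bin/env python3
    # Sketch: `cephadm ls` prints a JSON array with one object per daemon on
    # this host (name, fsid, systemd unit, state, ...). The log invokes a
    # versioned copy under /var/lib/ceph/<fsid>/; a plain `cephadm` behaves
    # the same where the package is installed. Needs root.
    import json
    import subprocess

    daemons = json.loads(subprocess.check_output(["cephadm", "ls"]))
    for d in daemons:
        print(d.get("name"), d.get("state", "unknown"))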
Jan 22 13:42:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:42:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:10.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:42:10 compute-2 ceph-mon[77081]: pgmap v413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:10 compute-2 podman[108114]: 2026-01-22 13:42:10.914693678 +0000 UTC m=+0.056824532 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:42:10 compute-2 sudo[108183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgcmtqlvpilxootyaiyulgweaybyxaah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089330.480301-867-118720975988345/AnsiballZ_stat.py'
Jan 22 13:42:10 compute-2 sudo[108183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:11 compute-2 podman[108114]: 2026-01-22 13:42:11.019493483 +0000 UTC m=+0.161624337 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 13:42:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:11.084+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:11 compute-2 python3.9[108185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:11 compute-2 sudo[108183]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-2 sudo[108353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxpxlolnisvdwrhnrxokvvffhzjxpfxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089330.480301-867-118720975988345/AnsiballZ_file.py'
Jan 22 13:42:11 compute-2 sudo[108353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:11 compute-2 python3.9[108367]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:11 compute-2 sudo[108353]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:11 compute-2 podman[108400]: 2026-01-22 13:42:11.733367799 +0000 UTC m=+0.061851576 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:42:11 compute-2 podman[108400]: 2026-01-22 13:42:11.742449412 +0000 UTC m=+0.070933189 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:42:11 compute-2 podman[108493]: 2026-01-22 13:42:11.988013993 +0000 UTC m=+0.059671208 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20)
Jan 22 13:42:12 compute-2 podman[108493]: 2026-01-22 13:42:12.001689569 +0000 UTC m=+0.073346774 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 13:42:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:12.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:12 compute-2 sudo[107942]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:12.073+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:12 compute-2 sudo[108580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:12 compute-2 sudo[108580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:12 compute-2 sudo[108580]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:12 compute-2 sudo[108628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:42:12 compute-2 sudo[108628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:12 compute-2 sudo[108628]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:12 compute-2 sudo[108678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:12 compute-2 sudo[108678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:12 compute-2 sudo[108678]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:12 compute-2 sudo[108728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzximonjyydjqimwwswzqwrkxvyudzzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089332.0290365-903-133623386870668/AnsiballZ_stat.py'
Jan 22 13:42:12 compute-2 sudo[108728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:12 compute-2 sudo[108732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:42:12 compute-2 sudo[108732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:12 compute-2 python3.9[108731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:12 compute-2 sudo[108728]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:12 compute-2 sudo[108732]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:13.044+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:13 compute-2 sudo[108911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwauugwbsxfujhjgpixkjqxelwemcjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089332.0290365-903-133623386870668/AnsiballZ_copy.py'
Jan 22 13:42:13 compute-2 sudo[108911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:13 compute-2 python3.9[108913]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089332.0290365-903-133623386870668/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:13 compute-2 sudo[108911]: pam_unix(sudo:session): session closed for user root
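The copy task above logs the SHA-1 it deployed (checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185), the same digest the surrounding stat calls compute to decide whether a rendered template changed. Recomputing it locally verifies the file landed intact:

    #!/usr/bin/env python3
    # Verify the deployed file against the checksum Ansible logged above.
    import hashlib

    LOGGED = "3ce353c89bce3b135a0ed688d4e338b2efb15185"

    h = hashlib.sha1()
    with open("/etc/nftables/edpm-update-jumps.nft", "rb") as f:
        while block := f.read(65536):
            h.update(block)
    print("match" if h.hexdigest() == LOGGED else "mismatch", h.hexdigest())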
Jan 22 13:42:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:14.002+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:14.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:14 compute-2 ceph-mon[77081]: pgmap v414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:42:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:42:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:42:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:42:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:42:14 compute-2 sudo[109063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hltuwobldfbrloqzvoqubxeapvrojckj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089333.9151435-947-45328102815728/AnsiballZ_stat.py'
Jan 22 13:42:14 compute-2 sudo[109063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:14 compute-2 python3.9[109066]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:14 compute-2 sudo[109063]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:15.044+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:15 compute-2 sudo[109142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucqkjnndncfnukhlvvqhtkiwhakpjrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089333.9151435-947-45328102815728/AnsiballZ_file.py'
Jan 22 13:42:15 compute-2 sudo[109142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:15 compute-2 python3.9[109144]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:15 compute-2 sudo[109142]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:15 compute-2 ceph-mon[77081]: pgmap v415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:15 compute-2 sudo[109294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rprdvtscgpyvsdhiivxxwocieotiokqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089335.6405842-984-247573696600800/AnsiballZ_stat.py'
Jan 22 13:42:16 compute-2 sudo[109294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:16.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:16.088+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:16 compute-2 python3.9[109296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:16 compute-2 sudo[109294]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.388688) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336388737, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2593, "num_deletes": 251, "total_data_size": 5257530, "memory_usage": 5338304, "flush_reason": "Manual Compaction"}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336419632, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3384523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7365, "largest_seqno": 9953, "table_properties": {"data_size": 3374668, "index_size": 5773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25170, "raw_average_key_size": 21, "raw_value_size": 3352581, "raw_average_value_size": 2826, "num_data_blocks": 255, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089177, "oldest_key_time": 1769089177, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 30999 microseconds, and 7093 cpu microseconds.
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.419690) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3384523 bytes OK
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.419715) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421550) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421569) EVENT_LOG_v1 {"time_micros": 1769089336421564, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421591) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5245572, prev total WAL file size 5245572, number of live WAL files 2.
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.422938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3305KB)], [15(8589KB)]
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336423028, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12179698, "oldest_snapshot_seqno": -1}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 4243 keys, 10523668 bytes, temperature: kUnknown
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336501515, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 10523668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10489352, "index_size": 22622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103721, "raw_average_key_size": 24, "raw_value_size": 10406610, "raw_average_value_size": 2452, "num_data_blocks": 980, "num_entries": 4243, "num_filter_entries": 4243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.501812) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10523668 bytes
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.503353) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.0 rd, 133.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 4766, records dropped: 523 output_compression: NoCompression
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.503371) EVENT_LOG_v1 {"time_micros": 1769089336503362, "job": 6, "event": "compaction_finished", "compaction_time_micros": 78575, "compaction_time_cpu_micros": 24274, "output_level": 6, "num_output_files": 1, "total_output_size": 10523668, "num_input_records": 4766, "num_output_records": 4243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336504150, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336505814, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.422810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
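[annotation] The JOB 6 summary above prints two derived ratios, read-write-amplify(6.7) and write-amplify(3.1). A minimal sketch of where those come from, assuming (as the in/out figures suggest) both are taken relative to the bytes read from the compaction's start level; RocksDB computes from exact byte counts, so the rounded MB figures in the log only reproduce the printed values to within rounding:

```python
# Hedged sketch: re-derive the amplification ratios in the JOB 6 line above.
# Variable names are ours, not RocksDB API; inputs are the rounded MB values
# the log itself prints ("in(3.2, 8.4) out(10.0)").
mb_in_l0 = 3.2   # bytes read from the compaction's start level (1 file @ L0)
mb_in_l6 = 8.4   # bytes read from the output level (1 file @ L6)
mb_out = 10.0    # bytes written back to L6

write_amp = mb_out / mb_in_l0                               # ~3.1, matches log
read_write_amp = (mb_in_l0 + mb_in_l6 + mb_out) / mb_in_l0  # ~6.8 (log: 6.7)

print(f"write-amplify({write_amp:.1f}) read-write-amplify({read_write_amp:.1f})")
```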
Jan 22 13:42:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:16 compute-2 sudo[109373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqljcsbnaxzcfkbjtsypnfcumirnlslk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089335.6405842-984-247573696600800/AnsiballZ_file.py'
Jan 22 13:42:16 compute-2 sudo[109373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
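[annotation] Each radosgw request in this section leaves a three-line trace: "starting new request", "req done", and a beast access line. The beast line is close enough to combined log format to parse mechanically; the steady 2 s cadence of anonymous "HEAD /" probes from 192.168.122.100 and .102 looks like frontend health checking. A minimal parsing sketch using the line above; the regex is ours and only covers the fields this log shows:

```python
# Hedged sketch: extract client, request, status and latency from a beast
# access line. Sample copied from the log above; regex is our own.
import re

line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
        '[22/Jan/2026:13:42:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000027s')
m = re.search(r'(\S+) - (\S+) \[([^\]]+)\] "([^"]+)" (\d+) (\d+).*latency=([\d.]+)s',
              line)
client, user, when, request, status, nbytes, latency = m.groups()
print(client, repr(request), status, f"{float(latency) * 1000:.3f} ms")
# -> 192.168.122.102 'HEAD / HTTP/1.0' 200 1.000 ms
```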
Jan 22 13:42:16 compute-2 python3.9[109375]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:16 compute-2 sudo[109373]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:17.122+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:17 compute-2 sudo[109525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pexbngfnhbckiwdnzydikvooocixajrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089337.1133926-1020-32163231141240/AnsiballZ_stat.py'
Jan 22 13:42:17 compute-2 sudo[109525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:17 compute-2 ceph-mon[77081]: pgmap v416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:17 compute-2 python3.9[109527]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:17 compute-2 sudo[109525]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:17 compute-2 sudo[109603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nomqxxyoyiukmsecqdjlprqfcicthxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089337.1133926-1020-32163231141240/AnsiballZ_file.py'
Jan 22 13:42:17 compute-2 sudo[109603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:18.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:18.079+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:18 compute-2 python3.9[109605]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:18 compute-2 sudo[109603]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:18 compute-2 ceph-mon[77081]: pgmap v417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:18 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
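[annotation] The monitor refreshes this SLOW_OPS health message on a fixed 5 s cadence (328, 333, 338, 343, 348 sec across this section), so the onset of the blocked op can be backed out from any single update. A minimal sketch, taking the year from the OSD's own ISO timestamps above; the regex is ours:

```python
# Hedged sketch: estimate when the oldest blocked op started from one
# SLOW_OPS health update. Sample copied from the log above.
import re
from datetime import datetime, timedelta

line = ("Jan 22 13:42:18 compute-2 ceph-mon[77081]: Health check update: "
        "2 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)")

m = re.search(r"^(\w+ +\d+ [\d:]+) .*blocked for (\d+) sec", line)
stamp = datetime.strptime(f"2026 {m.group(1)}", "%Y %b %d %H:%M:%S")
onset = stamp - timedelta(seconds=int(m.group(2)))
print(onset.time())  # ~13:36:50, i.e. the op stalled before this section begins
```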
Jan 22 13:42:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:19.062+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:19 compute-2 sudo[109756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiyzsurlvxymiphthdexmbydsyxiirrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089338.8098261-1059-79878119112149/AnsiballZ_command.py'
Jan 22 13:42:19 compute-2 sudo[109756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:19 compute-2 python3.9[109758]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:19 compute-2 sudo[109756]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:20.020+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:20.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:20 compute-2 sudo[109911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojxcueqmjiqbglfssrlhdyoqlbhugcof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089339.747391-1083-255669096301099/AnsiballZ_blockinfile.py'
Jan 22 13:42:20 compute-2 sudo[109911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:20 compute-2 python3.9[109913]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:20 compute-2 sudo[109911]: pam_unix(sudo:session): session closed for user root
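[annotation] Both the piped check a few lines up (`cat ... | nft -c -f -`) and the blockinfile task's validate=nft -c -f %s option are dry-run parses: `-c` checks the ruleset without committing it. A minimal sketch of the same check driven from Python; the file list is copied from the log, the error handling is ours:

```python
# Hedged sketch: concatenate the edpm nftables fragments and syntax-check
# them with `nft -c -f -` (reads the ruleset from stdin, commits nothing).
import subprocess

parts = ["/etc/nftables/edpm-chains.nft", "/etc/nftables/edpm-flushes.nft",
         "/etc/nftables/edpm-rules.nft", "/etc/nftables/edpm-update-jumps.nft",
         "/etc/nftables/edpm-jumps.nft"]
ruleset = "".join(open(p).read() for p in parts)

res = subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                     text=True, capture_output=True)
print("ruleset OK" if res.returncode == 0 else res.stderr)
```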
Jan 22 13:42:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:20.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:21 compute-2 sudo[110038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:21.049+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:21 compute-2 sudo[110038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:21 compute-2 sudo[110038]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:21 compute-2 sudo[110088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkwfzijjqmeisdjndfjpwfprjgpjzjej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089340.7793708-1110-9027895315874/AnsiballZ_file.py'
Jan 22 13:42:21 compute-2 sudo[110088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:21 compute-2 sudo[110092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:42:21 compute-2 sudo[110092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:21 compute-2 sudo[110092]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:21 compute-2 ceph-mon[77081]: pgmap v418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:42:21 compute-2 python3.9[110091]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:21 compute-2 sudo[110088]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:21 compute-2 sudo[110266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtyadmjxmkxmfctnoapbbgjnxbmichhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089341.5077496-1110-53961065334754/AnsiballZ_file.py'
Jan 22 13:42:21 compute-2 sudo[110266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:21 compute-2 python3.9[110268]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:21 compute-2 sudo[110266]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:22.048+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:42:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:22.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:42:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:22.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:22 compute-2 sudo[110419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqclodehbgkmweaokpjvqkzuwyhthhbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089342.470646-1154-281276417194987/AnsiballZ_mount.py'
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.777916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342778025, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 339, "num_deletes": 250, "total_data_size": 247438, "memory_usage": 253504, "flush_reason": "Manual Compaction"}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 22 13:42:22 compute-2 sudo[110419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342781432, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 162793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9958, "largest_seqno": 10292, "table_properties": {"data_size": 160615, "index_size": 342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5684, "raw_average_key_size": 19, "raw_value_size": 156292, "raw_average_value_size": 533, "num_data_blocks": 14, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089337, "oldest_key_time": 1769089337, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 3541 microseconds, and 1023 cpu microseconds.
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.781461) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 162793 bytes OK
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.781476) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783252) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783266) EVENT_LOG_v1 {"time_micros": 1769089342783262, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 245067, prev total WAL file size 245067, number of live WAL files 2.
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(158KB)], [18(10MB)]
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342783620, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10686461, "oldest_snapshot_seqno": -1}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4025 keys, 7891879 bytes, temperature: kUnknown
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342843243, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7891879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7862578, "index_size": 18119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 99768, "raw_average_key_size": 24, "raw_value_size": 7787090, "raw_average_value_size": 1934, "num_data_blocks": 782, "num_entries": 4025, "num_filter_entries": 4025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.843672) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7891879 bytes
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.845719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.8 rd, 132.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(114.1) write-amplify(48.5) OK, records in: 4536, records dropped: 511 output_compression: NoCompression
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.845767) EVENT_LOG_v1 {"time_micros": 1769089342845745, "job": 8, "event": "compaction_finished", "compaction_time_micros": 59770, "compaction_time_cpu_micros": 17627, "output_level": 6, "num_output_files": 1, "total_output_size": 7891879, "num_input_records": 4536, "num_output_records": 4025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342846210, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342850517, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:42:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
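[annotation] The rocksdb EVENT_LOG_v1 entries throughout this section are valid JSON after the marker, so the flush/compaction cycle can be summarized mechanically instead of by eyeballing the human-readable lines. A minimal sketch over an abridged copy of the job-8 compaction_finished payload above; splitting on the marker is our own convention:

```python
# Hedged sketch: parse one EVENT_LOG_v1 payload and recompute the record
# drop count. Payload abridged from the job-8 line above.
import json

raw = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342845745, "job": 8, '
       '"event": "compaction_finished", "compaction_time_micros": 59770, '
       '"num_input_records": 4536, "num_output_records": 4025, '
       '"lsm_state": [0, 0, 0, 0, 0, 0, 1]}')

event = json.loads(raw.split("EVENT_LOG_v1 ", 1)[1])
dropped = event["num_input_records"] - event["num_output_records"]
print(event["event"], f"job={event['job']}", f"dropped={dropped}",
      f"lsm={event['lsm_state']}")
# -> compaction_finished job=8 dropped=511 ..., matching "records dropped: 511"
```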
Jan 22 13:42:22 compute-2 python3.9[110421]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:42:22 compute-2 sudo[110419]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:23.030+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:23 compute-2 sudo[110571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrkaizalrlqvbabaftbzrycrjhdttskb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089343.1390827-1154-270581449082234/AnsiballZ_mount.py'
Jan 22 13:42:23 compute-2 sudo[110571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:23 compute-2 ceph-mon[77081]: pgmap v419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:23 compute-2 python3.9[110573]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 13:42:23 compute-2 sudo[110571]: pam_unix(sudo:session): session closed for user root
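[annotation] The two ansible.posix.mount tasks above mount hugetlbfs at /dev/hugepages1G (pagesize=1G) and /dev/hugepages2M (pagesize=2M) and persist them via fstab (state=mounted, boot=True). A minimal verification sketch run on the host afterwards; the paths are from the log, and we print the raw options since the kernel's pagesize spelling in /proc/mounts varies:

```python
# Hedged sketch: report whatever /proc/mounts shows for the two hugetlbfs
# mount points created by the tasks above.
targets = {"/dev/hugepages1G", "/dev/hugepages2M"}

with open("/proc/mounts") as f:
    for entry in f:
        src, target, fstype, opts, *_ = entry.split()
        if fstype == "hugetlbfs" and target in targets:
            print(target, opts)
```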
Jan 22 13:42:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:24.032+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:24.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:24.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:24 compute-2 sshd-session[102671]: Connection closed by 192.168.122.30 port 50308
Jan 22 13:42:24 compute-2 sshd-session[102668]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:42:24 compute-2 systemd[1]: session-39.scope: Deactivated successfully.
Jan 22 13:42:24 compute-2 systemd[1]: session-39.scope: Consumed 29.166s CPU time.
Jan 22 13:42:24 compute-2 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Jan 22 13:42:24 compute-2 systemd-logind[787]: Removed session 39.
Jan 22 13:42:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:25.068+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:25 compute-2 ceph-mon[77081]: pgmap v420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:26.071+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:26.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:26 compute-2 ceph-mon[77081]: pgmap v421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:26.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:27.076+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:28.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:28.082+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:28.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:28 compute-2 ceph-mon[77081]: pgmap v422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:28 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:28 compute-2 sudo[110601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:28 compute-2 sudo[110601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:28 compute-2 sudo[110601]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:28 compute-2 sudo[110626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:28 compute-2 sudo[110626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:28 compute-2 sudo[110626]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:29.062+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:30.060+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:30.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:31.038+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:31 compute-2 ceph-mon[77081]: pgmap v423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:32.013+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:32 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 13:42:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:32.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:33.012+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:33 compute-2 ceph-mon[77081]: pgmap v424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:33 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:34.048+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:34.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:34 compute-2 sshd-session[110655]: Invalid user solv from 92.118.39.95 port 38112
Jan 22 13:42:34 compute-2 sshd-session[110655]: Connection closed by invalid user solv 92.118.39.95 port 38112 [preauth]
Jan 22 13:42:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:35.037+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:35 compute-2 ceph-mon[77081]: pgmap v425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:35 compute-2 sshd-session[110658]: Accepted publickey for zuul from 192.168.122.30 port 54736 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:42:35 compute-2 systemd-logind[787]: New session 40 of user zuul.
Jan 22 13:42:35 compute-2 systemd[1]: Started Session 40 of User zuul.
Jan 22 13:42:35 compute-2 sshd-session[110658]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:42:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:36.045+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:36.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:37.054+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:37 compute-2 ceph-mon[77081]: pgmap v426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:37 compute-2 sudo[110812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqbevauklegfdynfhegnqmsutipuulzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089355.6602516-25-96119768566704/AnsiballZ_tempfile.py'
Jan 22 13:42:37 compute-2 sudo[110812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:37 compute-2 python3.9[110814]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 13:42:37 compute-2 sudo[110812]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:38.042+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:38.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:38.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:38 compute-2 sudo[110965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwfbhogpetvgnvrzxhktxagxlgpraazd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089358.211772-61-58307626258641/AnsiballZ_stat.py'
Jan 22 13:42:38 compute-2 sudo[110965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:38 compute-2 python3.9[110967]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:42:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:38.994+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:39 compute-2 ceph-mon[77081]: pgmap v427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:39 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:39 compute-2 sudo[110965]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:39.987+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:40 compute-2 sudo[111120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arncaqurmauwcqjuefxhhrryjjvhbnxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089360.138106-85-164140837793909/AnsiballZ_slurp.py'
Jan 22 13:42:40 compute-2 sudo[111120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:42:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:40.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:42:40 compute-2 python3.9[111122]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 22 13:42:40 compute-2 sudo[111120]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:40 compute-2 ceph-mon[77081]: pgmap v428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:40.959+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:41 compute-2 sudo[111272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-totwbkgujiptuvltjymtujehkzvcvvtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089361.1670475-110-213754723382366/AnsiballZ_stat.py'
Jan 22 13:42:41 compute-2 sudo[111272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:41 compute-2 python3.9[111274]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.rtp1qndu follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:42:41 compute-2 sudo[111272]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:41.973+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:42 compute-2 sudo[111397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajmnltteccjazljeegurxfnhkccybng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089361.1670475-110-213754723382366/AnsiballZ_copy.py'
Jan 22 13:42:42 compute-2 sudo[111397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:42 compute-2 python3.9[111399]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.rtp1qndu mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089361.1670475-110-213754723382366/.source.rtp1qndu _original_basename=.8r04hswq follow=False checksum=9893b3bde8503c371031e4467aece9772279f87c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:42 compute-2 sudo[111397]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:42.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:42.938+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:43 compute-2 sudo[111550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyyhlwzbexfgqteltggruqrfxwdfmzvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089362.7104366-155-159945251584047/AnsiballZ_setup.py'
Jan 22 13:42:43 compute-2 sudo[111550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:43 compute-2 ceph-mon[77081]: pgmap v429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:43 compute-2 python3.9[111552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:42:43 compute-2 sudo[111550]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:43.952+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:44.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:44 compute-2 sudo[111703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbujttpodkcqdqvpuqzfmdjbpblhtpim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089364.0775442-179-176370010608972/AnsiballZ_blockinfile.py'
Jan 22 13:42:44 compute-2 sudo[111703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:44 compute-2 python3.9[111706]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=
                                              create=True mode=0644 path=/tmp/ansible.rtp1qndu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:44 compute-2 sudo[111703]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:44.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:45 compute-2 sshd-session[111704]: Invalid user sol from 45.148.10.240 port 36306
Jan 22 13:42:45 compute-2 sshd-session[111704]: Connection closed by invalid user sol 45.148.10.240 port 36306 [preauth]
Jan 22 13:42:45 compute-2 ceph-mon[77081]: pgmap v430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:45 compute-2 sudo[111857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muomaefrpxbnuodznkeqvdnmnepfhyxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089365.3466337-203-203194680656365/AnsiballZ_command.py'
Jan 22 13:42:45 compute-2 sudo[111857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:45.942+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:45 compute-2 python3.9[111859]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.rtp1qndu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:46 compute-2 sudo[111857]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:46.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:46 compute-2 sudo[112012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olfxptgmctuweshvtapjwjpyxukjhfgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089366.3041816-227-258226585269366/AnsiballZ_file.py'
Jan 22 13:42:46 compute-2 sudo[112012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:46 compute-2 ceph-mon[77081]: pgmap v431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:46.915+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:46 compute-2 python3.9[112014]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.rtp1qndu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:42:46 compute-2 sudo[112012]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:47 compute-2 sshd-session[110661]: Connection closed by 192.168.122.30 port 54736
Jan 22 13:42:47 compute-2 sshd-session[110658]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:42:47 compute-2 systemd[1]: session-40.scope: Deactivated successfully.
Jan 22 13:42:47 compute-2 systemd[1]: session-40.scope: Consumed 4.887s CPU time.
Jan 22 13:42:47 compute-2 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Jan 22 13:42:47 compute-2 systemd-logind[787]: Removed session 40.
Jan 22 13:42:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:47 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:47.924+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:48.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:48.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:48.920+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:48 compute-2 sudo[112040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:48 compute-2 sudo[112040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:48 compute-2 sudo[112040]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:48 compute-2 ceph-mon[77081]: pgmap v432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:49 compute-2 sudo[112065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:42:49 compute-2 sudo[112065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:42:49 compute-2 sudo[112065]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:49.904+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:50.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:50.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:51 compute-2 ceph-mon[77081]: pgmap v433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:51.846+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:52.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:52 compute-2 sshd-session[112092]: Accepted publickey for zuul from 192.168.122.30 port 35502 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:42:52 compute-2 systemd-logind[787]: New session 41 of user zuul.
Jan 22 13:42:52 compute-2 systemd[1]: Started Session 41 of User zuul.
Jan 22 13:42:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:52.892+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:52 compute-2 sshd-session[112092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:42:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:53.854+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:53 compute-2 ceph-mon[77081]: pgmap v434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:53 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:53 compute-2 python3.9[112245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:42:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:54.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:54.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:55 compute-2 ceph-mon[77081]: pgmap v435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:55 compute-2 sudo[112401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdaywgrbbfnvnlivsxqdqjtgtyptnyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089374.7749722-58-94330415837871/AnsiballZ_systemd.py'
Jan 22 13:42:55 compute-2 sudo[112401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:55 compute-2 python3.9[112403]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 13:42:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:55.834+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:55 compute-2 sudo[112401]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:56.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:56 compute-2 sudo[112556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byaspwsplkynbrukuofpinmyuezgxurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089376.1821687-83-191234222346054/AnsiballZ_systemd.py'
Jan 22 13:42:56 compute-2 sudo[112556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:42:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:56.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:56 compute-2 python3.9[112558]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:42:56 compute-2 sudo[112556]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:56.824+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:57 compute-2 ceph-mon[77081]: pgmap v436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:57 compute-2 sudo[112709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytrydeqiqlvpwisggbnrxxcdkwjgjvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089377.313568-109-181894700770307/AnsiballZ_command.py'
Jan 22 13:42:57 compute-2 sudo[112709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:57.872+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:57 compute-2 python3.9[112711]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:42:58 compute-2 sudo[112709]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:42:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:58.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:42:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:42:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:58.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:42:58 compute-2 sudo[112863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euielaizhxejrrduqxqinmhwikujoocp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089378.3194726-133-42676488616219/AnsiballZ_stat.py'
Jan 22 13:42:58 compute-2 sudo[112863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:58.921+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:58 compute-2 python3.9[112865]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:42:59 compute-2 sudo[112863]: pam_unix(sudo:session): session closed for user root
Jan 22 13:42:59 compute-2 ceph-mon[77081]: pgmap v437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:42:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:59 compute-2 sudo[113015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledqxthrhlhqtuquisuwdsllsivwzenx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089379.2979178-160-236874274852180/AnsiballZ_file.py'
Jan 22 13:42:59 compute-2 sudo[113015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:42:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:59.907+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:42:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:42:59 compute-2 python3.9[113017]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:00 compute-2 sudo[113015]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:00.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:00 compute-2 sshd-session[112095]: Connection closed by 192.168.122.30 port 35502
Jan 22 13:43:00 compute-2 sshd-session[112092]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:43:00 compute-2 systemd[1]: session-41.scope: Deactivated successfully.
Jan 22 13:43:00 compute-2 systemd[1]: session-41.scope: Consumed 3.752s CPU time.
Jan 22 13:43:00 compute-2 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Jan 22 13:43:00 compute-2 systemd-logind[787]: Removed session 41.
Jan 22 13:43:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:00.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:00.933+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:01 compute-2 ceph-mon[77081]: pgmap v438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:01.964+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:02.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:02 compute-2 ceph-mon[77081]: pgmap v439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:02 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:02.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:03.972+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:04.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:04.993+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:05 compute-2 ceph-mon[77081]: pgmap v440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:06.007+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:06 compute-2 sshd-session[113045]: Accepted publickey for zuul from 192.168.122.30 port 37126 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:43:06 compute-2 systemd-logind[787]: New session 42 of user zuul.
Jan 22 13:43:06 compute-2 systemd[1]: Started Session 42 of User zuul.
Jan 22 13:43:06 compute-2 sshd-session[113045]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:43:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:06.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:07.030+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:07 compute-2 python3.9[113199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:43:07 compute-2 ceph-mon[77081]: pgmap v441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:07.980+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:08 compute-2 sudo[113353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndifozqhbaxmtykaybxkeafstuzqavtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089387.77332-64-91141227022139/AnsiballZ_setup.py'
Jan 22 13:43:08 compute-2 sudo[113353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:08 compute-2 python3.9[113355]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:43:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:08 compute-2 sudo[113353]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:08.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:09.007+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:09 compute-2 sudo[113412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:09 compute-2 sudo[113412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:09 compute-2 sudo[113412]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:09 compute-2 sudo[113463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfrtwcjuuuywapjmokhultjqpyhswid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089387.77332-64-91141227022139/AnsiballZ_dnf.py'
Jan 22 13:43:09 compute-2 sudo[113463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:09 compute-2 sudo[113464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:09 compute-2 sudo[113464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:09 compute-2 sudo[113464]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:09 compute-2 python3.9[113470]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 13:43:09 compute-2 ceph-mon[77081]: pgmap v442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:09 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:10.000+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:10 compute-2 sudo[113463]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:10.983+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:11 compute-2 ceph-mon[77081]: pgmap v443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:11 compute-2 python3.9[113642]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:43:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:11.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:12.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:12.996+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:13 compute-2 python3.9[113794]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:43:13 compute-2 ceph-mon[77081]: pgmap v444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:13 compute-2 python3.9[113944]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:43:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:14.001+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:14.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:14 compute-2 python3.9[114095]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:43:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:14.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:14.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:15 compute-2 sshd-session[113048]: Connection closed by 192.168.122.30 port 37126
Jan 22 13:43:15 compute-2 sshd-session[113045]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:43:15 compute-2 systemd[1]: session-42.scope: Deactivated successfully.
Jan 22 13:43:15 compute-2 systemd[1]: session-42.scope: Consumed 5.624s CPU time.
Jan 22 13:43:15 compute-2 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Jan 22 13:43:15 compute-2 systemd-logind[787]: Removed session 42.
Jan 22 13:43:15 compute-2 ceph-mon[77081]: pgmap v445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:15.940+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:16.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:16 compute-2 ceph-mon[77081]: pgmap v446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:16.963+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:17.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:18.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:18.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:18 compute-2 ceph-mon[77081]: pgmap v447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:18 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:18.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:19.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:20.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:20.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:21 compute-2 sudo[114123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:21 compute-2 sudo[114123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-2 sudo[114123]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:21 compute-2 sshd-session[114125]: Accepted publickey for zuul from 192.168.122.30 port 34722 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:43:21 compute-2 systemd-logind[787]: New session 43 of user zuul.
Jan 22 13:43:21 compute-2 systemd[1]: Started Session 43 of User zuul.
Jan 22 13:43:21 compute-2 sudo[114150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:43:21 compute-2 sshd-session[114125]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:43:21 compute-2 sudo[114150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-2 sudo[114150]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:21 compute-2 sudo[114177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:21 compute-2 sudo[114177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-2 sudo[114177]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:21 compute-2 sudo[114221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:43:21 compute-2 sudo[114221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:21.889+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:21 compute-2 podman[114397]: 2026-01-22 13:43:21.931017425 +0000 UTC m=+0.064434184 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:43:22 compute-2 podman[114397]: 2026-01-22 13:43:22.022962257 +0000 UTC m=+0.156379016 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 13:43:22 compute-2 python3.9[114468]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:43:22 compute-2 ceph-mon[77081]: pgmap v448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:22 compute-2 podman[114633]: 2026-01-22 13:43:22.736464242 +0000 UTC m=+0.177206643 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:43:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:22 compute-2 podman[114633]: 2026-01-22 13:43:22.747680337 +0000 UTC m=+0.188422708 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:43:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:22.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:22 compute-2 podman[114699]: 2026-01-22 13:43:22.939656601 +0000 UTC m=+0.051402610 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived)
Jan 22 13:43:22 compute-2 podman[114699]: 2026-01-22 13:43:22.98371541 +0000 UTC m=+0.095461439 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git)
Jan 22 13:43:23 compute-2 sudo[114221]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:23 compute-2 sudo[114732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:23 compute-2 sudo[114732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:23 compute-2 sudo[114732]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:23 compute-2 ceph-mon[77081]: pgmap v449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:23 compute-2 sudo[114757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:43:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:23 compute-2 sudo[114757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:23 compute-2 sudo[114757]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:23 compute-2 sudo[114782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:23 compute-2 sudo[114782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:23 compute-2 sudo[114782]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:23 compute-2 sudo[114836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:43:23 compute-2 sudo[114836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:23.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:23 compute-2 sudo[114974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxzmafpeweyhhaazovkwbbiinmwxyhqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089403.5446403-113-59373612777723/AnsiballZ_file.py'
Jan 22 13:43:23 compute-2 sudo[114974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:24 compute-2 sudo[114836]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:24 compute-2 python3.9[114978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:24 compute-2 sudo[114974]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:43:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:43:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:43:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:43:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:43:24 compute-2 sudo[115141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccveaukwckdlqzialazypvpwkfelbskz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089404.277138-113-16551626838776/AnsiballZ_file.py'
Jan 22 13:43:24 compute-2 sudo[115141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:24.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:24.796+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:24 compute-2 python3.9[115143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:24 compute-2 sudo[115141]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:25 compute-2 sudo[115293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thfzzqgzyrgksadyoariitytkjqpyoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089405.023385-154-59949951216217/AnsiballZ_stat.py'
Jan 22 13:43:25 compute-2 sudo[115293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:25 compute-2 ceph-mon[77081]: pgmap v450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:25 compute-2 python3.9[115295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:25 compute-2 sudo[115293]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:25.831+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:26 compute-2 sudo[115416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uinesuvfqflcnqpovuubnwrzeaymfwdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089405.023385-154-59949951216217/AnsiballZ_copy.py'
Jan 22 13:43:26 compute-2 sudo[115416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:26 compute-2 python3.9[115418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089405.023385-154-59949951216217/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=63b51bd5f8f7b1595ccb625079ef1c0e74a34cd4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:26 compute-2 sudo[115416]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:26.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:26.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:27 compute-2 sudo[115569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxtylibmjydnehatolrpfchcbwngfykk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089406.7464411-154-56265308602244/AnsiballZ_stat.py'
Jan 22 13:43:27 compute-2 sudo[115569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:27 compute-2 python3.9[115571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:27 compute-2 sudo[115569]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:27 compute-2 ceph-mon[77081]: pgmap v451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:27 compute-2 sudo[115692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjeqbmkkzjvvhehyayvcxgclatepusbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089406.7464411-154-56265308602244/AnsiballZ_copy.py'
Jan 22 13:43:27 compute-2 sudo[115692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:27.845+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:27 compute-2 python3.9[115694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089406.7464411-154-56265308602244/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=cc1c70588824ebebf3437effcc8b7daf397d0332 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:27 compute-2 sudo[115692]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:28 compute-2 sudo[115845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fintozecsopckxgiiumjmcqsoflhbczo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089408.0385644-154-185353800573225/AnsiballZ_stat.py'
Jan 22 13:43:28 compute-2 sudo[115845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:28 compute-2 python3.9[115847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:28 compute-2 sudo[115845]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:28.833+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:28 compute-2 ceph-mon[77081]: pgmap v452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:28 compute-2 sudo[115968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqdxoyylgpxnsrhebujqkrwuqhuwwyfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089408.0385644-154-185353800573225/AnsiballZ_copy.py'
Jan 22 13:43:28 compute-2 sudo[115968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:29 compute-2 python3.9[115970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089408.0385644-154-185353800573225/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=c446a79c9e0c2c4e1866f2c8d564bd6e393bc473 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:29 compute-2 sudo[115968]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:29 compute-2 sudo[115995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:29 compute-2 sudo[115995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:29 compute-2 sudo[115995]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:29 compute-2 sudo[116041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:29 compute-2 sudo[116041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:29 compute-2 sudo[116041]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:29 compute-2 sudo[116170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrrhglfbvxrsohgjihfnxywvhuaxgep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089409.3123312-289-135645718367871/AnsiballZ_file.py'
Jan 22 13:43:29 compute-2 sudo[116170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:29 compute-2 python3.9[116172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:29.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:29 compute-2 sudo[116170]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:30 compute-2 sudo[116322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jueqvqevxcquzezsgkkexqfgrzacejud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089409.974603-289-41650392418577/AnsiballZ_file.py'
Jan 22 13:43:30 compute-2 sudo[116322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:30 compute-2 python3.9[116324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:30 compute-2 sudo[116322]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:30.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:30.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:30 compute-2 ceph-mon[77081]: pgmap v453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:30 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:30 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:43:30 compute-2 sudo[116475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhcmsfchfcsciogulaiemqwribmptfne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089410.6217-336-11513084826455/AnsiballZ_stat.py'
Jan 22 13:43:30 compute-2 sudo[116475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:30 compute-2 sudo[116478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:30 compute-2 sudo[116478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:30 compute-2 sudo[116478]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:31 compute-2 sudo[116503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:43:31 compute-2 sudo[116503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:31 compute-2 sudo[116503]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:31 compute-2 python3.9[116477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:31 compute-2 sudo[116475]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:31 compute-2 sudo[116648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdmeohwnupznofurkkfqjetlesigonkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089410.6217-336-11513084826455/AnsiballZ_copy.py'
Jan 22 13:43:31 compute-2 sudo[116648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:31 compute-2 python3.9[116650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089410.6217-336-11513084826455/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=d7ceac7a2a3de5d60ce6109627fc28aa85299752 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:31 compute-2 sudo[116648]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:31.827+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:32 compute-2 sudo[116800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlxzecjyicfscovxvitmsaxqsjswwfnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089411.7551444-336-172225582209765/AnsiballZ_stat.py'
Jan 22 13:43:32 compute-2 sudo[116800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:32 compute-2 python3.9[116802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:32 compute-2 sudo[116800]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:32 compute-2 sudo[116924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljcbfnisknhuescnpmigfvgonbeqiovv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089411.7551444-336-172225582209765/AnsiballZ_copy.py'
Jan 22 13:43:32 compute-2 sudo[116924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:32 compute-2 python3.9[116926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089411.7551444-336-172225582209765/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:32 compute-2 sudo[116924]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:32.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:32 compute-2 ceph-mon[77081]: pgmap v454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:33 compute-2 sudo[117076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkjxkabiamyaxgdlayomoannpzdwphur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089412.874086-336-206277195598569/AnsiballZ_stat.py'
Jan 22 13:43:33 compute-2 sudo[117076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:33 compute-2 python3.9[117078]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:33 compute-2 sudo[117076]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:33 compute-2 sudo[117199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibdqezjthouydwvmnfwknruprxouarts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089412.874086-336-206277195598569/AnsiballZ_copy.py'
Jan 22 13:43:33 compute-2 sudo[117199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:33.858+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:33 compute-2 python3.9[117201]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089412.874086-336-206277195598569/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=dd5d85a06a624929f5f6a9d093c91f37f447db74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:33 compute-2 sudo[117199]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:34 compute-2 sudo[117352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlphwxcmjaqnwuwedxpawilebeejdjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089414.1983545-460-231947691842293/AnsiballZ_file.py'
Jan 22 13:43:34 compute-2 sudo[117352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:34 compute-2 python3.9[117354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:34 compute-2 sudo[117352]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:34.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:35 compute-2 ceph-mon[77081]: pgmap v455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:35 compute-2 sudo[117504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlaxwogpjnzpjbzefjahirnmikjhekbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089414.8789353-460-16141112868673/AnsiballZ_file.py'
Jan 22 13:43:35 compute-2 sudo[117504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:35 compute-2 python3.9[117506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:35 compute-2 sudo[117504]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:35.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:35 compute-2 sudo[117656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvmkltoirxdrknrfxxmikwcsvvvcawqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089415.6172404-508-169552344552263/AnsiballZ_stat.py'
Jan 22 13:43:35 compute-2 sudo[117656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:36 compute-2 python3.9[117658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:36 compute-2 sudo[117656]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:36 compute-2 sudo[117780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imjrwzxzyvtuktzohetwaodzfikievoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089415.6172404-508-169552344552263/AnsiballZ_copy.py'
Jan 22 13:43:36 compute-2 sudo[117780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:36.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:36 compute-2 python3.9[117782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089415.6172404-508-169552344552263/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=064b6b2de03bd1b3c0ee9a7de3a1cc7f54c2c8c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:36.828+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:36 compute-2 sudo[117780]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:37 compute-2 sudo[117932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnmitmgwehtyutmkmlfafifeqiqmjhbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089416.9563866-508-230053014081636/AnsiballZ_stat.py'
Jan 22 13:43:37 compute-2 sudo[117932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:37 compute-2 ceph-mon[77081]: pgmap v456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:37 compute-2 python3.9[117934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:37 compute-2 sudo[117932]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:37 compute-2 sudo[118055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guplliqrqffzooocgmthfwacanvulsrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089416.9563866-508-230053014081636/AnsiballZ_copy.py'
Jan 22 13:43:37 compute-2 sudo[118055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:37.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:37 compute-2 python3.9[118057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089416.9563866-508-230053014081636/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:37 compute-2 sudo[118055]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:38 compute-2 sudo[118208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrrosdgpiwkledwqkefcxjlisynaqzny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089418.049029-508-56370431584138/AnsiballZ_stat.py'
Jan 22 13:43:38 compute-2 sudo[118208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:38 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:38 compute-2 python3.9[118210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:38 compute-2 sudo[118208]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:43:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:38.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:43:38 compute-2 sudo[118331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vreeehumsfuuomrntljleagnxkgubntv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089418.049029-508-56370431584138/AnsiballZ_copy.py'
Jan 22 13:43:38 compute-2 sudo[118331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:38.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:39 compute-2 python3.9[118333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089418.049029-508-56370431584138/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=0bd5fdf5b338410f4386fce1270ddc78cda35238 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:39 compute-2 sudo[118331]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:39.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:39 compute-2 ceph-mon[77081]: pgmap v457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:39.885+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:40 compute-2 sudo[118483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krlowvnajvwedttufrpjxyiartehfiob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089419.7379494-673-94814949771895/AnsiballZ_file.py'
Jan 22 13:43:40 compute-2 sudo[118483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:40 compute-2 python3.9[118485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:40 compute-2 sudo[118483]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:40 compute-2 sudo[118636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlqntetjeyrkruprdpgnrlilyxfqmski ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089420.4309561-705-40875315371661/AnsiballZ_stat.py'
Jan 22 13:43:40 compute-2 sudo[118636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:40 compute-2 python3.9[118638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:40 compute-2 sudo[118636]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:40.912+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:41 compute-2 sudo[118759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhcsvugbfxaasvvbbuqxtyvowkmeeoqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089420.4309561-705-40875315371661/AnsiballZ_copy.py'
Jan 22 13:43:41 compute-2 sudo[118759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:41 compute-2 python3.9[118761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089420.4309561-705-40875315371661/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:41 compute-2 sudo[118759]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:41 compute-2 ceph-mon[77081]: pgmap v458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:41 compute-2 sudo[118911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blpwlxookqgfgbwmuwrteildmifhjvgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089421.603759-750-164442234578623/AnsiballZ_file.py'
Jan 22 13:43:41 compute-2 sudo[118911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:41.930+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:42 compute-2 python3.9[118913]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:42 compute-2 sudo[118911]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:42 compute-2 sudo[119064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yunixzqinqrtolcqcwinudvunohltfgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089422.2060344-770-60931968141020/AnsiballZ_stat.py'
Jan 22 13:43:42 compute-2 sudo[119064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:42 compute-2 python3.9[119066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:42 compute-2 sudo[119064]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:42 compute-2 ceph-mon[77081]: pgmap v459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:42.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:42.887+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:43 compute-2 sudo[119187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdypkbyrsaqtjnsyeuwjchtuuwfersxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089422.2060344-770-60931968141020/AnsiballZ_copy.py'
Jan 22 13:43:43 compute-2 sudo[119187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:43 compute-2 python3.9[119189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089422.2060344-770-60931968141020/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:43 compute-2 sudo[119187]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:43 compute-2 sudo[119339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwwzsqihyseujfsbumnorzxboijazyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089423.490294-816-165154142311727/AnsiballZ_file.py'
Jan 22 13:43:43 compute-2 sudo[119339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:43 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:43 compute-2 python3.9[119341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:43.916+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:43 compute-2 sudo[119339]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:44 compute-2 sudo[119491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofbroxsxeiwmhgkwmjgkhglwtqwovdhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089424.0723221-838-1354098801245/AnsiballZ_stat.py'
Jan 22 13:43:44 compute-2 sudo[119491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:44 compute-2 python3.9[119493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:44 compute-2 sudo[119491]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:44 compute-2 sudo[119615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpcqeqsqijkgvmhhdapuczaxmmobocpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089424.0723221-838-1354098801245/AnsiballZ_copy.py'
Jan 22 13:43:44 compute-2 sudo[119615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:44 compute-2 ceph-mon[77081]: pgmap v460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:44.910+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:44 compute-2 python3.9[119617]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089424.0723221-838-1354098801245/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:45 compute-2 sudo[119615]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:45 compute-2 sudo[119767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axnonnoktajftajccbwqejpqjrvyyvli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089425.22268-879-165587017013406/AnsiballZ_file.py'
Jan 22 13:43:45 compute-2 sudo[119767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:45 compute-2 python3.9[119769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:45 compute-2 sudo[119767]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:45.880+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:46 compute-2 sudo[119919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nejoxrfcnsamkeyaluvbsecjrakfbbox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089425.8454711-902-51465542634224/AnsiballZ_stat.py'
Jan 22 13:43:46 compute-2 sudo[119919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:46 compute-2 python3.9[119921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:46 compute-2 sudo[119919]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:46 compute-2 sudo[120043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwbgkqdasmztvtyoikqdksfcuqluyoyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089425.8454711-902-51465542634224/AnsiballZ_copy.py'
Jan 22 13:43:46 compute-2 sudo[120043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:46.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:46 compute-2 python3.9[120045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089425.8454711-902-51465542634224/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:46.862+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:46 compute-2 sudo[120043]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:46 compute-2 ceph-mon[77081]: pgmap v461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:47 compute-2 sudo[120195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nijizpvhyhdhoqfpdkdyhrcevxqmckfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089427.0527663-944-254506513118167/AnsiballZ_file.py'
Jan 22 13:43:47 compute-2 sudo[120195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:47 compute-2 python3.9[120197]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:47 compute-2 sudo[120195]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:47.846+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:47 compute-2 sudo[120347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfmvbahfvwnkfjaktifcmzfaxalhbhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089427.6470435-965-141793958172935/AnsiballZ_stat.py'
Jan 22 13:43:47 compute-2 sudo[120347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:48 compute-2 python3.9[120349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:48 compute-2 sudo[120347]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:48 compute-2 sudo[120471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwpculmrmazvyjsrccwvgkdnacleqedm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089427.6470435-965-141793958172935/AnsiballZ_copy.py'
Jan 22 13:43:48 compute-2 sudo[120471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:48 compute-2 python3.9[120473]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089427.6470435-965-141793958172935/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:48 compute-2 sudo[120471]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:48.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:48.895+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:49 compute-2 ceph-mon[77081]: pgmap v462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:49 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:49 compute-2 sudo[120623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqiunqpgeqfririqruoxsrggxkoqlzqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089428.9464147-1009-169900437767077/AnsiballZ_file.py'
Jan 22 13:43:49 compute-2 sudo[120623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:49.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:49 compute-2 sudo[120626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:49 compute-2 sudo[120626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:49 compute-2 sudo[120626]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:49 compute-2 python3.9[120625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:43:49 compute-2 sudo[120623]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:49 compute-2 sudo[120651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:43:49 compute-2 sudo[120651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:43:49 compute-2 sudo[120651]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:49.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:49 compute-2 sudo[120825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyelenpwcjfednknfjsuzzcfwddnztwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089429.617371-1032-76471749655435/AnsiballZ_stat.py'
Jan 22 13:43:49 compute-2 sudo[120825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:50 compute-2 python3.9[120827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:43:50 compute-2 sudo[120825]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:50 compute-2 sudo[120949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhvdqxesgepuffwmqtgslbhvjrmfympd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089429.617371-1032-76471749655435/AnsiballZ_copy.py'
Jan 22 13:43:50 compute-2 sudo[120949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:43:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:50.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:50 compute-2 python3.9[120951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089429.617371-1032-76471749655435/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:43:50 compute-2 sudo[120949]: pam_unix(sudo:session): session closed for user root
Jan 22 13:43:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:50.807+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:51 compute-2 ceph-mon[77081]: pgmap v463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:51.848+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:52.825+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:53 compute-2 ceph-mon[77081]: pgmap v464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:53.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:53.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:54 compute-2 sshd-session[114176]: Connection closed by 192.168.122.30 port 34722
Jan 22 13:43:54 compute-2 sshd-session[114125]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:43:54 compute-2 systemd[1]: session-43.scope: Deactivated successfully.
Jan 22 13:43:54 compute-2 systemd[1]: session-43.scope: Consumed 21.706s CPU time.
Jan 22 13:43:54 compute-2 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Jan 22 13:43:54 compute-2 systemd-logind[787]: Removed session 43.
Jan 22 13:43:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:54.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:54.899+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:55 compute-2 ceph-mon[77081]: pgmap v465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:55.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:55.849+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:43:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:56.889+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:57.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:57 compute-2 ceph-mon[77081]: pgmap v466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:57.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:43:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:43:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:43:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:58.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:43:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:43:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:59.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:43:59 compute-2 ceph-mon[77081]: pgmap v467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:43:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:43:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:59.866+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:43:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:00 compute-2 sshd-session[120980]: Accepted publickey for zuul from 192.168.122.30 port 52134 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:44:00 compute-2 systemd-logind[787]: New session 44 of user zuul.
Jan 22 13:44:00 compute-2 systemd[1]: Started Session 44 of User zuul.
Jan 22 13:44:00 compute-2 sshd-session[120980]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:44:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:00 compute-2 ceph-mon[77081]: pgmap v468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:00.842+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:00 compute-2 sudo[121134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjnsmystpycuapqwjoieunjjbuybdnwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089440.349426-30-180763065021373/AnsiballZ_file.py'
Jan 22 13:44:00 compute-2 sudo[121134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:01 compute-2 python3.9[121136]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:01 compute-2 sudo[121134]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:01.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:01 compute-2 sudo[121286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wphzagtzbobtpnocagierrpqealrmhiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089441.3151555-64-172847734527370/AnsiballZ_stat.py'
Jan 22 13:44:01 compute-2 sudo[121286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:01.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:01 compute-2 python3.9[121288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:01 compute-2 sudo[121286]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:02 compute-2 sudo[121410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxibvugjbkloxwtmkfaoiirkcuzxrpsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089441.3151555-64-172847734527370/AnsiballZ_copy.py'
Jan 22 13:44:02 compute-2 sudo[121410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:02 compute-2 python3.9[121412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089441.3151555-64-172847734527370/.source.conf _original_basename=ceph.conf follow=False checksum=c3a8ec6ec08fd3904e44a403280c0742b2934d96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:02 compute-2 sudo[121410]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:02.760+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:02 compute-2 ceph-mon[77081]: pgmap v469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:02 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:03 compute-2 sudo[121562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hctmfhahthzqynnuaxuavhhzhljhzwex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089442.8313508-64-250243575326191/AnsiballZ_stat.py'
Jan 22 13:44:03 compute-2 sudo[121562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:03 compute-2 python3.9[121564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:03 compute-2 sudo[121562]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:03 compute-2 sudo[121685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qshqjxdzedttvcukhpbmmhhgjpxzjojq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089442.8313508-64-250243575326191/AnsiballZ_copy.py'
Jan 22 13:44:03 compute-2 sudo[121685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:03.759+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:03 compute-2 python3.9[121687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089442.8313508-64-250243575326191/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8d4a0ad3eb7bcba9ed45036c12ef9de6a4ee9832 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:03 compute-2 sudo[121685]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:04 compute-2 sshd-session[120983]: Connection closed by 192.168.122.30 port 52134
Jan 22 13:44:04 compute-2 sshd-session[120980]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:44:04 compute-2 systemd[1]: session-44.scope: Deactivated successfully.
Jan 22 13:44:04 compute-2 systemd[1]: session-44.scope: Consumed 2.534s CPU time.
Jan 22 13:44:04 compute-2 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Jan 22 13:44:04 compute-2 systemd-logind[787]: Removed session 44.
Jan 22 13:44:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:04.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:04.784+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:04 compute-2 ceph-mon[77081]: pgmap v470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:05.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:05.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:44:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:06.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:44:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:06.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:07 compute-2 ceph-mon[77081]: pgmap v471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:07.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:07.833+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:08.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:08.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:09 compute-2 ceph-mon[77081]: pgmap v472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:09 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:09.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:09 compute-2 sudo[121715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:09 compute-2 sudo[121715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:09 compute-2 sudo[121715]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:09 compute-2 sudo[121740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:09 compute-2 sudo[121740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:09 compute-2 sudo[121740]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:09.840+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:09 compute-2 sshd-session[121765]: Accepted publickey for zuul from 192.168.122.30 port 60964 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:44:09 compute-2 systemd-logind[787]: New session 45 of user zuul.
Jan 22 13:44:09 compute-2 systemd[1]: Started Session 45 of User zuul.
Jan 22 13:44:09 compute-2 sshd-session[121765]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:44:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:10.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:10.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:10 compute-2 python3.9[121919]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:44:11 compute-2 ceph-mon[77081]: pgmap v473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:11.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:11.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:12 compute-2 sudo[122073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocghivfcgecqutflxacowfdxgdlboetg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089451.624051-65-237577514191715/AnsiballZ_file.py'
Jan 22 13:44:12 compute-2 sudo[122073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:12 compute-2 python3.9[122075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:12 compute-2 sudo[122073]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:12 compute-2 sudo[122226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilduanfvvidizgeeyrtehhfyxabohvqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089452.4188578-65-147340974374794/AnsiballZ_file.py'
Jan 22 13:44:12 compute-2 sudo[122226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:12.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:12.865+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:12 compute-2 python3.9[122228]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:12 compute-2 sudo[122226]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:13 compute-2 ceph-mon[77081]: pgmap v474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:13.892+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:13 compute-2 python3.9[122378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:44:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:14 compute-2 sudo[122529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxsxkmaosigqomikonnqewnljhnhlequ ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089454.1906507-133-70204683943736/AnsiballZ_seboolean.py'
Jan 22 13:44:14 compute-2 sudo[122529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:14 compute-2 python3.9[122531]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 13:44:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:14.920+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:15 compute-2 ceph-mon[77081]: pgmap v475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:15.903+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:15 compute-2 sudo[122529]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:16.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:16.917+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:17 compute-2 sudo[122686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phrsznmmktyqnhypxbrmxjczdvtarylg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089456.7358913-163-222344189142875/AnsiballZ_setup.py'
Jan 22 13:44:17 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 22 13:44:17 compute-2 sudo[122686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:44:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:17.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:44:17 compute-2 python3.9[122688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:44:17 compute-2 ceph-mon[77081]: pgmap v476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:17 compute-2 sudo[122686]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:17.872+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:18 compute-2 sudo[122770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbcosezzbyoxritcsvyykkabufhukwio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089456.7358913-163-222344189142875/AnsiballZ_dnf.py'
Jan 22 13:44:18 compute-2 sudo[122770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:18 compute-2 python3.9[122772]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:44:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:18.910+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:18.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:19 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:19.866+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:19 compute-2 sudo[122770]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:20 compute-2 ceph-mon[77081]: pgmap v477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:20 compute-2 sudo[122925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwhdvzebbxbimwhthytwyvgugtiaudhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089460.117966-200-272491284198932/AnsiballZ_systemd.py'
Jan 22 13:44:20 compute-2 sudo[122925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:20.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:20.849+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:20 compute-2 python3.9[122927]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
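The systemd module call above both enables openvswitch.service at boot and starts it now (enabled=True, state=started). The same operation with plain systemctl:
    sudo systemctl enable --now openvswitch.service
    systemctl is-enabled openvswitch.service   # expect: enabled
    systemctl is-active openvswitch.service    # expect: active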
Jan 22 13:44:21 compute-2 sudo[122925]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:21.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:21 compute-2 ceph-mon[77081]: pgmap v478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:21 compute-2 sudo[123080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhlxaqwjcituyfrecqgxewjsshbzgspv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089461.3549125-223-176214813838860/AnsiballZ_edpm_nftables_snippet.py'
Jan 22 13:44:21 compute-2 sudo[123080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:21.858+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:21 compute-2 python3[123082]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
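The snippet above stages four firewall rules in /var/lib/edpm-config/firewall/ovn.yaml: open UDP 4789 (VXLAN) and UDP 6081 (Geneve), plus two raw-table NOTRACK rules so Geneve traffic bypasses conntrack in both directions. A rough sketch of the nft statements such entries translate to once the role renders them; the inet filter table and EDPM_INPUT chain names are assumptions for illustration, only raw/OUTPUT/PREROUTING come from the snippet itself:
    nft add rule inet filter EDPM_INPUT udp dport 4789 accept   # 118 neutron vxlan networks
    nft add rule inet filter EDPM_INPUT udp dport 6081 accept   # 119 neutron geneve networks
    nft add rule ip raw OUTPUT     udp dport 6081 notrack       # 120 skip conntrack, egress
    nft add rule ip raw PREROUTING udp dport 6081 notrack       # 121 skip conntrack, ingress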
Jan 22 13:44:21 compute-2 sudo[123080]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:22 compute-2 sudo[123233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfmuhgvwqjevunznhsgovzitlziprlkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089462.4867277-250-56130797648076/AnsiballZ_file.py'
Jan 22 13:44:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:22.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:22 compute-2 sudo[123233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:22.874+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:22 compute-2 python3.9[123235]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:22 compute-2 sudo[123233]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:23.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:23 compute-2 sudo[123385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylvswcehzwsulzoyaatcxvuzvlfolzkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089463.2241423-274-40524426579557/AnsiballZ_stat.py'
Jan 22 13:44:23 compute-2 sudo[123385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:23 compute-2 ceph-mon[77081]: pgmap v479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:23 compute-2 python3.9[123387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:23 compute-2 sudo[123385]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:23.886+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:24 compute-2 sudo[123463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djtidcsqabepqaxcppzgsuhhnpdrexaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089463.2241423-274-40524426579557/AnsiballZ_file.py'
Jan 22 13:44:24 compute-2 sudo[123463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:24 compute-2 python3.9[123465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:24 compute-2 sudo[123463]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:24 compute-2 ceph-mon[77081]: pgmap v480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:24.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:24.927+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:24 compute-2 sudo[123616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezzlmtduiebbtsdwwojwtrqzecwnpwfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089464.6484525-311-106706735997280/AnsiballZ_stat.py'
Jan 22 13:44:24 compute-2 sudo[123616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:25 compute-2 python3.9[123618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:25 compute-2 sudo[123616]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:25 compute-2 sudo[123694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cepqjrefpdrxeamghyjlrlybwmfebhjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089464.6484525-311-106706735997280/AnsiballZ_file.py'
Jan 22 13:44:25 compute-2 sudo[123694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:25 compute-2 python3.9[123696]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yoyo3fgj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:25 compute-2 sudo[123694]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:25.914+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:26 compute-2 sudo[123846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dppzyjxfdgzunkmlzaqtkmjbqjfoomns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089465.93706-346-151184621896216/AnsiballZ_stat.py'
Jan 22 13:44:26 compute-2 sudo[123846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:26 compute-2 python3.9[123848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:26 compute-2 sudo[123846]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:26 compute-2 sudo[123925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnfkujioxbvluutgclckpqpvjakudgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089465.93706-346-151184621896216/AnsiballZ_file.py'
Jan 22 13:44:26 compute-2 sudo[123925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:26 compute-2 ceph-mon[77081]: pgmap v481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:26.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:26 compute-2 python3.9[123927]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:26 compute-2 sudo[123925]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:26.919+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:27 compute-2 sudo[124077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqllwipnazpeidammtunbbtsfrukjyxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089467.2675052-385-149938469511249/AnsiballZ_command.py'
Jan 22 13:44:27 compute-2 sudo[124077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:27.935+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:28 compute-2 python3.9[124079]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
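nft -j list ruleset, run here to learn what already exists before writing anything, dumps the live ruleset in libnftables JSON: a top-level "nftables" array holding a metainfo object followed by table, chain, and rule objects. A quick way to eyeball it:
    sudo nft -j list ruleset | python3 -m json.tool | head -n 20   # {"nftables": [{"metainfo": ...}, ...]}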
Jan 22 13:44:28 compute-2 sudo[124077]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:28 compute-2 sudo[124231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neujlyvumjunrcwrqaqlwdvukyyyvzol ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089468.2583394-409-67998860644703/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:44:28 compute-2 sudo[124231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:28.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:28 compute-2 python3[124233]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:44:28 compute-2 sudo[124231]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:28.939+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:29 compute-2 ceph-mon[77081]: pgmap v482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:29 compute-2 sudo[124404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owzrkxmrgxagcsblvtmodrzgudybmrio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089469.4669268-433-970236791258/AnsiballZ_stat.py'
Jan 22 13:44:29 compute-2 sudo[124404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:29 compute-2 sudo[124365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:29 compute-2 sudo[124365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:29 compute-2 sudo[124365]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:29 compute-2 sudo[124411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:29 compute-2 sudo[124411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:29 compute-2 sudo[124411]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:29.925+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:29 compute-2 python3.9[124408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:30 compute-2 sudo[124404]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:30 compute-2 sudo[124559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liidxhtkdkqrvmzcbnsvdoifyjoyaarh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089469.4669268-433-970236791258/AnsiballZ_copy.py'
Jan 22 13:44:30 compute-2 sudo[124559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:30 compute-2 python3.9[124561]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089469.4669268-433-970236791258/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:30.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:30 compute-2 sudo[124559]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:30.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:31 compute-2 sudo[124591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:31 compute-2 sudo[124591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-2 sudo[124591]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-2 sudo[124648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:44:31 compute-2 sudo[124648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-2 sudo[124648]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-2 ceph-mon[77081]: pgmap v483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:31 compute-2 sudo[124688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:31 compute-2 sudo[124688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-2 sudo[124688]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:31 compute-2 sudo[124736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
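cephadm ls, invoked here via the hash-suffixed copy under /var/lib/ceph/<fsid>/, inventories the daemons deployed on this host as a JSON array. A quick skim, assuming the array-of-objects layout with a name field that current cephadm emits:
    sudo cephadm ls | python3 -c 'import json,sys; [print(d["name"]) for d in json.load(sys.stdin)]'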
Jan 22 13:44:31 compute-2 sudo[124736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:31 compute-2 sudo[124811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcccstkwipkvapaezhqwlpqwsnmimdqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089471.117775-478-201123633362601/AnsiballZ_stat.py'
Jan 22 13:44:31 compute-2 sudo[124811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:31 compute-2 python3.9[124813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:31 compute-2 sudo[124811]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:31 compute-2 podman[124913]: 2026-01-22 13:44:31.883812827 +0000 UTC m=+0.061717124 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 13:44:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:31.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:32 compute-2 podman[124913]: 2026-01-22 13:44:32.008709072 +0000 UTC m=+0.186613339 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 13:44:32 compute-2 sudo[125059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydolncvirfyivldsqpfoueztxuvjuyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089471.117775-478-201123633362601/AnsiballZ_copy.py'
Jan 22 13:44:32 compute-2 sudo[125059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:32 compute-2 python3.9[125067]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089471.117775-478-201123633362601/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:32 compute-2 sudo[125059]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:32 compute-2 podman[125195]: 2026-01-22 13:44:32.53248876 +0000 UTC m=+0.050203806 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:44:32 compute-2 podman[125195]: 2026-01-22 13:44:32.543884355 +0000 UTC m=+0.061599391 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:44:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:32.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:32 compute-2 podman[125331]: 2026-01-22 13:44:32.811185744 +0000 UTC m=+0.103937295 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 13:44:32 compute-2 podman[125331]: 2026-01-22 13:44:32.825792665 +0000 UTC m=+0.118544236 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Jan 22 13:44:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:32.876+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:32 compute-2 sudo[124736]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:32 compute-2 sudo[125417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibdabbneismlhixmncykxemxxmhgamvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089472.5185363-523-15736659686305/AnsiballZ_stat.py'
Jan 22 13:44:32 compute-2 sudo[125417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:32 compute-2 sudo[125420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:32 compute-2 sudo[125420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:32 compute-2 sudo[125420]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-2 python3.9[125419]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:33 compute-2 sudo[125445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:44:33 compute-2 sudo[125445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:33 compute-2 sudo[125445]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-2 sudo[125417]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-2 sudo[125472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:33 compute-2 sudo[125472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:33 compute-2 sudo[125472]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-2 sudo[125498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:44:33 compute-2 sudo[125498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:33 compute-2 ceph-mon[77081]: pgmap v484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:33 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:33.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:33 compute-2 sudo[125657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-darvttjugkiytbnxymgcjrdcbqzfvchy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089472.5185363-523-15736659686305/AnsiballZ_copy.py'
Jan 22 13:44:33 compute-2 sudo[125657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:33 compute-2 sudo[125498]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-2 python3.9[125660]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089472.5185363-523-15736659686305/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:33 compute-2 sudo[125657]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:33.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:34 compute-2 sudo[125826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smsdxhedklqjzqxdbgfzpsntwsthztus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089473.962673-569-3361112911399/AnsiballZ_stat.py'
Jan 22 13:44:34 compute-2 sudo[125826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:34 compute-2 python3.9[125828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:34 compute-2 sudo[125826]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:44:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:44:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:44:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:44:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:44:34 compute-2 sudo[125952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlyiljahdldycgxutwupgasltlayhcjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089473.962673-569-3361112911399/AnsiballZ_copy.py'
Jan 22 13:44:34 compute-2 sudo[125952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:34.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:34.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:34 compute-2 python3.9[125954]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089473.962673-569-3361112911399/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:34 compute-2 sudo[125952]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:35 compute-2 ceph-mon[77081]: pgmap v485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:35 compute-2 sudo[126104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doiuzoebvpzeoohnnjopolaizswhdgtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089475.3304865-613-8667519708261/AnsiballZ_stat.py'
Jan 22 13:44:35 compute-2 sudo[126104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:35 compute-2 python3.9[126106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:35.850+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:35 compute-2 sudo[126104]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:36 compute-2 sudo[126229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddvwcjajvfswtkhhibqakucewrecpff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089475.3304865-613-8667519708261/AnsiballZ_copy.py'
Jan 22 13:44:36 compute-2 sudo[126229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:36 compute-2 python3.9[126231]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089475.3304865-613-8667519708261/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:36 compute-2 sudo[126229]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:36.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:36.813+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:36 compute-2 sudo[126382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwtzivcbpkoqutiiqkrmrfqktwfsdmnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089476.7171574-658-52655063446244/AnsiballZ_file.py'
Jan 22 13:44:36 compute-2 sudo[126382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:37 compute-2 python3.9[126384]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:37 compute-2 sudo[126382]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:37.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:37 compute-2 ceph-mon[77081]: pgmap v486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:37.778+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:37 compute-2 sudo[126534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vijmmtwhwxraygtbmjzqkwhoyiwmqcup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089477.4868996-682-193695648421645/AnsiballZ_command.py'
Jan 22 13:44:37 compute-2 sudo[126534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:37 compute-2 python3.9[126536]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:38 compute-2 sudo[126534]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:38 compute-2 sudo[126690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutufozbsnpbowyamgjjrzehyfrfpfgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089478.26878-707-210030224485050/AnsiballZ_blockinfile.py'
Jan 22 13:44:38 compute-2 sudo[126690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:38.801+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:38.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:38 compute-2 python3.9[126692]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:38 compute-2 sudo[126690]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:39.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:39 compute-2 ceph-mon[77081]: pgmap v487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:39 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:39 compute-2 sudo[126842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhvfpuslmpowumxhptygpjkquicudlng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089479.323082-734-44621494784662/AnsiballZ_command.py'
Jan 22 13:44:39 compute-2 sudo[126842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:39.767+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:39 compute-2 python3.9[126844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:39 compute-2 sudo[126842]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:40 compute-2 sudo[126923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:40 compute-2 sudo[126923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:40 compute-2 sudo[126923]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:40 compute-2 sudo[126971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:44:40 compute-2 sudo[126971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:40 compute-2 sudo[126971]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:40 compute-2 sudo[127046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkqlxwhunftypmngjwxcsewfgfivsmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089480.2289932-758-119511846108945/AnsiballZ_stat.py'
Jan 22 13:44:40 compute-2 sudo[127046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:40 compute-2 python3.9[127048]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:44:40 compute-2 sudo[127046]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:40.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:41 compute-2 ceph-mon[77081]: pgmap v488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:44:41 compute-2 sudo[127200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swtfhisonwrznsesfkuzvdpxbhbgmmfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089480.963259-782-57192646483565/AnsiballZ_command.py'
Jan 22 13:44:41 compute-2 sudo[127200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:41.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:41 compute-2 python3.9[127202]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:41 compute-2 sudo[127200]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:41.809+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:42 compute-2 sudo[127355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinvsvopojguekzwyadiezmytubtdroe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089481.7522526-806-37155557784361/AnsiballZ_file.py'
Jan 22 13:44:42 compute-2 sudo[127355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:42 compute-2 python3.9[127357]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:42 compute-2 sudo[127355]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:42.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:43.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:43 compute-2 ceph-mon[77081]: pgmap v489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:43 compute-2 python3.9[127508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:44:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:43.771+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:44.771+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:44:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:44:44 compute-2 sudo[127660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llbpqxjsywrnilfbigbgkszoxeosrnkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089484.6282933-925-158048649296262/AnsiballZ_command.py'
Jan 22 13:44:44 compute-2 sudo[127660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:45 compute-2 python3.9[127662]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:45 compute-2 ovs-vsctl[127663]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 22 13:44:45 compute-2 sudo[127660]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:45 compute-2 ceph-mon[77081]: pgmap v490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:45.734+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:45 compute-2 sudo[127813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbkiyzdhowujpsnumyratrkyekwnzcts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089485.5440714-953-231629656294109/AnsiballZ_command.py'
Jan 22 13:44:45 compute-2 sudo[127813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:46 compute-2 python3.9[127815]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:46 compute-2 sudo[127813]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:46 compute-2 sudo[127969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dazbsxyylynhydlvbkcjtadjqxftcrzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089486.3981166-976-74139350901029/AnsiballZ_command.py'
Jan 22 13:44:46 compute-2 sudo[127969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:46.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:46 compute-2 python3.9[127971]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:44:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:46.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:46 compute-2 ovs-vsctl[127972]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 22 13:44:46 compute-2 sudo[127969]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:47 compute-2 ceph-mon[77081]: pgmap v491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:47.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:47 compute-2 python3.9[128122]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:44:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:48 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:48 compute-2 sudo[128275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prrauhvbvqbeslqbrsierfjugmcklxuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089488.2615147-1028-186688930587063/AnsiballZ_file.py'
Jan 22 13:44:48 compute-2 sudo[128275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:48.733+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:48 compute-2 python3.9[128277]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:48 compute-2 sudo[128275]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:48.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:49 compute-2 ceph-mon[77081]: pgmap v492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:49 compute-2 sudo[128429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbpohqcbqgvboiilywnnlafufjzmagkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089489.0399358-1052-263635431570190/AnsiballZ_stat.py'
Jan 22 13:44:49 compute-2 sudo[128429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:49.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:49 compute-2 python3.9[128431]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:49 compute-2 sshd-session[128337]: Invalid user sol from 92.118.39.95 port 45336
Jan 22 13:44:49 compute-2 sudo[128429]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:49 compute-2 sshd-session[128337]: Connection closed by invalid user sol 92.118.39.95 port 45336 [preauth]
Jan 22 13:44:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:49.743+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:49 compute-2 sudo[128507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqepiwfslreuukbaidgxovxkauwkmbji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089489.0399358-1052-263635431570190/AnsiballZ_file.py'
Jan 22 13:44:49 compute-2 sudo[128507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:49 compute-2 python3.9[128509]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:49 compute-2 sudo[128507]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:50 compute-2 sudo[128510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:50 compute-2 sudo[128510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:50 compute-2 sudo[128510]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:50 compute-2 sudo[128548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:44:50 compute-2 sudo[128548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:44:50 compute-2 sudo[128548]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:50 compute-2 sudo[128710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apcdjvvtfjpnbajwabwzwdpnjitkjmju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089490.1572177-1052-204309847937019/AnsiballZ_stat.py'
Jan 22 13:44:50 compute-2 sudo[128710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:50 compute-2 python3.9[128712]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:50.707+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:50 compute-2 sudo[128710]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:44:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:50.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:44:50 compute-2 sudo[128788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufhhrqrcrqrpsdsyzbyuiolrpqyuigdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089490.1572177-1052-204309847937019/AnsiballZ_file.py'
Jan 22 13:44:50 compute-2 sudo[128788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:51 compute-2 python3.9[128790]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:44:51 compute-2 sudo[128788]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:51 compute-2 ceph-mon[77081]: pgmap v493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:51 compute-2 sudo[128940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgtaylyvmvdhqamhtdppsulwucqoiexz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089491.4979753-1121-50741469425029/AnsiballZ_file.py'
Jan 22 13:44:51 compute-2 sudo[128940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:51.749+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:51 compute-2 python3.9[128942]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:51 compute-2 sudo[128940]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:52 compute-2 sudo[129093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvrozudbwxzfttjkfhdsqnsxejgrlyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089492.3200521-1145-225103612523997/AnsiballZ_stat.py'
Jan 22 13:44:52 compute-2 sudo[129093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:52 compute-2 ceph-mon[77081]: pgmap v494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:52.721+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:52 compute-2 python3.9[129095]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:52.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:52 compute-2 sudo[129093]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:53 compute-2 sudo[129171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwttymwdoktykvjpmptolcjkyangfqdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089492.3200521-1145-225103612523997/AnsiballZ_file.py'
Jan 22 13:44:53 compute-2 sudo[129171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:53 compute-2 python3.9[129173]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:53 compute-2 sudo[129171]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:53.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:54 compute-2 sudo[129323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdlipirlbmiajcldciopbgrsrgmpmhjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089493.685123-1181-75070429686442/AnsiballZ_stat.py'
Jan 22 13:44:54 compute-2 sudo[129323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:54 compute-2 python3.9[129325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:54 compute-2 sudo[129323]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:54 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:54 compute-2 sudo[129402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqqmpuprvtbptigxazoydqjfkcmiczxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089493.685123-1181-75070429686442/AnsiballZ_file.py'
Jan 22 13:44:54 compute-2 sudo[129402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:54 compute-2 python3.9[129404]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:54 compute-2 sudo[129402]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:54.742+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:54.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:55 compute-2 ceph-mon[77081]: pgmap v495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:55 compute-2 sudo[129554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqnzcvxmdphcvqaatqatsxpbktztqod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089495.0195491-1217-39474536496790/AnsiballZ_systemd.py'
Jan 22 13:44:55 compute-2 sudo[129554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:55 compute-2 python3.9[129556]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:44:55 compute-2 systemd[1]: Reloading.
Jan 22 13:44:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:55.709+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:55 compute-2 systemd-sysv-generator[129585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:44:55 compute-2 systemd-rc-local-generator[129582]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:44:56 compute-2 sudo[129554]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:44:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:56.691+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:56.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:57.677+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:57 compute-2 sudo[129743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isaehwczfegtazoiogizyrgkjoicrllh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089497.4794984-1241-6261443391210/AnsiballZ_stat.py'
Jan 22 13:44:57 compute-2 sudo[129743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:58 compute-2 python3.9[129745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:58 compute-2 sudo[129743]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:58 compute-2 sudo[129821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbafimubqcxyeyybjunnlcwksqambkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089497.4794984-1241-6261443391210/AnsiballZ_file.py'
Jan 22 13:44:58 compute-2 sudo[129821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:58 compute-2 python3.9[129823]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:58 compute-2 sudo[129821]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:58 compute-2 ceph-mon[77081]: pgmap v496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:58.693+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:58.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:58 compute-2 sudo[129976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jesqmcyzsjunuptgcqmvmsdwawtwqvxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089498.688874-1276-56218314184435/AnsiballZ_stat.py'
Jan 22 13:44:58 compute-2 sudo[129976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:59 compute-2 sshd-session[129825]: Invalid user sol from 45.148.10.240 port 58342
Jan 22 13:44:59 compute-2 sshd-session[129825]: Connection closed by invalid user sol 45.148.10.240 port 58342 [preauth]
Jan 22 13:44:59 compute-2 python3.9[129978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:44:59 compute-2 sudo[129976]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:44:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:44:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:59.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:44:59 compute-2 sudo[130054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smntixwyxshwhicjqipyvibomudqblci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089498.688874-1276-56218314184435/AnsiballZ_file.py'
Jan 22 13:44:59 compute-2 sudo[130054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:44:59 compute-2 python3.9[130056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:44:59 compute-2 sudo[130054]: pam_unix(sudo:session): session closed for user root
Jan 22 13:44:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:59 compute-2 ceph-mon[77081]: pgmap v497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:44:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:44:59 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:44:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:59.731+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:44:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:00 compute-2 sudo[130207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srhxiwvxnntjwegmhntvtrzdlclccpjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089499.9719872-1312-231484325002277/AnsiballZ_systemd.py'
Jan 22 13:45:00 compute-2 sudo[130207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:00 compute-2 python3.9[130209]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:45:00 compute-2 systemd[1]: Reloading.
Jan 22 13:45:00 compute-2 systemd-rc-local-generator[130232]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:00 compute-2 systemd-sysv-generator[130237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:00.770+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:00.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:01 compute-2 systemd[1]: Starting Create netns directory...
Jan 22 13:45:01 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:45:01 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:45:01 compute-2 systemd[1]: Finished Create netns directory.
Jan 22 13:45:01 compute-2 sudo[130207]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:01 compute-2 ceph-mon[77081]: pgmap v498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:01.792+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:02 compute-2 sudo[130401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgccizuitrlseygeszjlcpatktkxrkbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089501.5032318-1344-129802461505506/AnsiballZ_file.py'
Jan 22 13:45:02 compute-2 sudo[130401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:02 compute-2 python3.9[130403]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:02 compute-2 sudo[130401]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:02 compute-2 sudo[130554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umabpynksalpvugzgzvrptbjzgfxtboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089502.4662876-1366-12125711191895/AnsiballZ_stat.py'
Jan 22 13:45:02 compute-2 sudo[130554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:02.803+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:02.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:02 compute-2 python3.9[130556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:02 compute-2 sudo[130554]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:03 compute-2 sudo[130677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtyqnvldkpwsjqjrggwreiiwqokorki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089502.4662876-1366-12125711191895/AnsiballZ_copy.py'
Jan 22 13:45:03 compute-2 sudo[130677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:03.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:03 compute-2 python3.9[130679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089502.4662876-1366-12125711191895/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:03 compute-2 sudo[130677]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:03 compute-2 ceph-mon[77081]: pgmap v499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:03.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:04 compute-2 sudo[130830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oojkbkuaeintkozrbvbovcdkxrxujyzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089504.2687097-1418-30238472317216/AnsiballZ_file.py'
Jan 22 13:45:04 compute-2 sudo[130830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:04.762+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:04 compute-2 python3.9[130832]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:04 compute-2 sudo[130830]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:04.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:05 compute-2 ceph-mon[77081]: pgmap v500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:05 compute-2 sudo[130982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqtbbbbmrezbsulraxnhdggyfmchbnwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089505.0705364-1442-261701869972234/AnsiballZ_file.py'
Jan 22 13:45:05 compute-2 sudo[130982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:05.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:05 compute-2 python3.9[130984]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:05 compute-2 sudo[130982]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:05.783+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:06 compute-2 sudo[131134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drxrqzjjwoqzlxjvrzofsklfvzkfsfvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089505.9257116-1465-108793043498901/AnsiballZ_stat.py'
Jan 22 13:45:06 compute-2 sudo[131134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:06 compute-2 python3.9[131136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:06 compute-2 sudo[131134]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:06 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:45:06 compute-2 sudo[131259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsdjpshmwlbknmbpowldgmvqsloanqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089505.9257116-1465-108793043498901/AnsiballZ_copy.py'
Jan 22 13:45:06 compute-2 sudo[131259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:06.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:06 compute-2 python3.9[131261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089505.9257116-1465-108793043498901/.source.json _original_basename=.t77mqyzy follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:06 compute-2 sudo[131259]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.690275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507690369, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2364, "num_deletes": 251, "total_data_size": 4762286, "memory_usage": 4808880, "flush_reason": "Manual Compaction"}
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 22 13:45:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:07 compute-2 ceph-mon[77081]: pgmap v501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507720456, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3097044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10297, "largest_seqno": 12656, "table_properties": {"data_size": 3088227, "index_size": 5055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21978, "raw_average_key_size": 20, "raw_value_size": 3068799, "raw_average_value_size": 2919, "num_data_blocks": 220, "num_entries": 1051, "num_filter_entries": 1051, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089343, "oldest_key_time": 1769089343, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 30227 microseconds, and 7083 cpu microseconds.
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.720512) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3097044 bytes OK
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.720528) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.723871) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.723912) EVENT_LOG_v1 {"time_micros": 1769089507723903, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.723931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4751571, prev total WAL file size 4751571, number of live WAL files 2.
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.725579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3024KB)], [21(7706KB)]
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507725617, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10988923, "oldest_snapshot_seqno": -1}
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4557 keys, 8311586 bytes, temperature: kUnknown
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507785230, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8311586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8279760, "index_size": 19300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 112570, "raw_average_key_size": 24, "raw_value_size": 8195764, "raw_average_value_size": 1798, "num_data_blocks": 819, "num_entries": 4557, "num_filter_entries": 4557, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.785586) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8311586 bytes
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.787531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.7 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 5076, records dropped: 519 output_compression: NoCompression
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.787569) EVENT_LOG_v1 {"time_micros": 1769089507787555, "job": 10, "event": "compaction_finished", "compaction_time_micros": 59828, "compaction_time_cpu_micros": 19372, "output_level": 6, "num_output_files": 1, "total_output_size": 8311586, "num_input_records": 5076, "num_output_records": 4557, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:45:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:07.787+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507788855, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 22 13:45:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507790089, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.725016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:45:07 compute-2 python3.9[131411]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:08 compute-2 ceph-mon[77081]: pgmap v502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:08 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:08.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:45:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:45:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:09.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:09.842+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:10 compute-2 sudo[131760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:10 compute-2 sudo[131760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:10 compute-2 sudo[131760]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:10 compute-2 sudo[131786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:10 compute-2 sudo[131786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:10 compute-2 sudo[131786]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:10 compute-2 sudo[131884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrqbkshzajsnqxvuefvdwtfzvdavrkko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089509.881086-1585-81296637776730/AnsiballZ_container_config_data.py'
Jan 22 13:45:10 compute-2 sudo[131884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:10 compute-2 python3.9[131886]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 22 13:45:10 compute-2 sudo[131884]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:10 compute-2 ceph-mon[77081]: pgmap v503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:45:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:10.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:45:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:10.855+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:11.829+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:12 compute-2 sudo[132036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyhljiajnkwcohjytgcgupyfprgjxciv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089511.511773-1618-71566417411105/AnsiballZ_container_config_hash.py'
Jan 22 13:45:12 compute-2 sudo[132036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:12 compute-2 python3.9[132038]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:45:12 compute-2 sudo[132036]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:12 compute-2 ceph-mon[77081]: pgmap v504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:12 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:12.869+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:13.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:13 compute-2 sudo[132189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opnbfrimqrywueswegfabcqipajmtqgm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089513.1010666-1648-253347997596471/AnsiballZ_edpm_container_manage.py'
Jan 22 13:45:13 compute-2 sudo[132189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:13.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:14 compute-2 python3[132191]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:45:14 compute-2 ceph-mon[77081]: pgmap v505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:14.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:15.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:15.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:16 compute-2 ceph-mon[77081]: pgmap v506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:16.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:17.881+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:18.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:19 compute-2 ceph-mon[77081]: pgmap v507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:19 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:19.806+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:20 compute-2 podman[132203]: 2026-01-22 13:45:20.031159846 +0000 UTC m=+5.779026092 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 13:45:20 compute-2 podman[132332]: 2026-01-22 13:45:20.157366555 +0000 UTC m=+0.048029427 container create 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 13:45:20 compute-2 podman[132332]: 2026-01-22 13:45:20.132233602 +0000 UTC m=+0.022896494 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 13:45:20 compute-2 python3[132191]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 13:45:20 compute-2 sudo[132189]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:20 compute-2 ceph-mon[77081]: pgmap v508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:20.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:20.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
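The anonymous "HEAD / HTTP/1.0" requests arriving every couple of seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health probes against radosgw rather than real object traffic. A hedged way to reproduce one by hand; RGW_ENDPOINT is hypothetical, since these log lines do not record the listening address or port:

    curl -sI "http://${RGW_ENDPOINT}/" | head -n1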
Jan 22 13:45:20 compute-2 sudo[132521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plyybpmzbuaeevlwymaqhqhntfrreioo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089520.578231-1673-35324427612010/AnsiballZ_stat.py'
Jan 22 13:45:20 compute-2 sudo[132521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:21 compute-2 python3.9[132523]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:45:21 compute-2 sudo[132521]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:21.821+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:22.781+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:22 compute-2 sudo[132676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amsczersdwpzxruffmngbsxjnhkymfti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089522.5722272-1699-89211553642826/AnsiballZ_file.py'
Jan 22 13:45:22 compute-2 sudo[132676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:22.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:22 compute-2 ceph-mon[77081]: pgmap v509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:23 compute-2 python3.9[132678]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:23 compute-2 sudo[132676]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:23 compute-2 sudo[132752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcscvyuurayqjfdqnoclnwcdrxgogrmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089522.5722272-1699-89211553642826/AnsiballZ_stat.py'
Jan 22 13:45:23 compute-2 sudo[132752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:23 compute-2 python3.9[132754]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:45:23 compute-2 sudo[132752]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:23.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:24 compute-2 sudo[132908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbrixjzodxswafxrtyujiurdtpsvufrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089523.5854223-1699-162723346994107/AnsiballZ_copy.py'
Jan 22 13:45:24 compute-2 sudo[132908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:24.823+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:24.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:24 compute-2 python3.9[132911]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089523.5854223-1699-162723346994107/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:24 compute-2 sudo[132908]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:25 compute-2 sudo[132985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skvzohctvsstczjbidoazbjieadimrqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089523.5854223-1699-162723346994107/AnsiballZ_systemd.py'
Jan 22 13:45:25 compute-2 sudo[132985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:25 compute-2 ceph-mon[77081]: pgmap v510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:25 compute-2 python3.9[132987]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:45:25 compute-2 systemd[1]: Reloading.
Jan 22 13:45:25 compute-2 systemd-rc-local-generator[133014]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:25 compute-2 systemd-sysv-generator[133019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:25.802+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:25 compute-2 sudo[132985]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:26 compute-2 sudo[133097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqbutuueugxnjmylezrgtkskwtsuymih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089523.5854223-1699-162723346994107/AnsiballZ_systemd.py'
Jan 22 13:45:26 compute-2 sudo[133097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:26 compute-2 python3.9[133099]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
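The ansible-systemd invocation above (state=restarted, enabled=True) is roughly equivalent to running the following as root; a sketch, assuming the unit file installed at 13:45:24 is already in place:

    systemctl daemon-reload
    systemctl enable edpm_ovn_controller.service
    systemctl restart edpm_ovn_controller.service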
Jan 22 13:45:26 compute-2 systemd[1]: Reloading.
Jan 22 13:45:26 compute-2 systemd-sysv-generator[133130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:26 compute-2 systemd-rc-local-generator[133127]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:26.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:26.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:27 compute-2 systemd[1]: Starting ovn_controller container...
Jan 22 13:45:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:27 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:45:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734721b55cc982c684897978a32ef7483dd133591a02eac7552c372dda4a22e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 22 13:45:27 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356.
Jan 22 13:45:27 compute-2 podman[133141]: 2026-01-22 13:45:27.478365841 +0000 UTC m=+0.331778876 container init 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + sudo -E kolla_set_configs
Jan 22 13:45:27 compute-2 podman[133141]: 2026-01-22 13:45:27.503160955 +0000 UTC m=+0.356573980 container start 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:45:27 compute-2 edpm-start-podman-container[133141]: ovn_controller
Jan 22 13:45:27 compute-2 systemd[1]: Created slice User Slice of UID 0.
Jan 22 13:45:27 compute-2 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 22 13:45:27 compute-2 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 22 13:45:27 compute-2 systemd[1]: Starting User Manager for UID 0...
Jan 22 13:45:27 compute-2 edpm-start-podman-container[133140]: Creating additional drop-in dependency for "ovn_controller" (8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356)
Jan 22 13:45:27 compute-2 systemd[133194]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 22 13:45:27 compute-2 podman[133163]: 2026-01-22 13:45:27.570285273 +0000 UTC m=+0.057938583 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 13:45:27 compute-2 systemd[1]: 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356-44e4d69ad703dadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 13:45:27 compute-2 systemd[1]: 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356-44e4d69ad703dadb.service: Failed with result 'exit-code'.
Jan 22 13:45:27 compute-2 systemd[1]: Reloading.
Jan 22 13:45:27 compute-2 systemd-rc-local-generator[133240]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:45:27 compute-2 systemd-sysv-generator[133244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:45:27 compute-2 systemd[133194]: Queued start job for default target Main User Target.
Jan 22 13:45:27 compute-2 systemd[133194]: Created slice User Application Slice.
Jan 22 13:45:27 compute-2 systemd[133194]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 22 13:45:27 compute-2 systemd[133194]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 13:45:27 compute-2 systemd[133194]: Reached target Paths.
Jan 22 13:45:27 compute-2 systemd[133194]: Reached target Timers.
Jan 22 13:45:27 compute-2 systemd[133194]: Starting D-Bus User Message Bus Socket...
Jan 22 13:45:27 compute-2 systemd[133194]: Starting Create User's Volatile Files and Directories...
Jan 22 13:45:27 compute-2 systemd[133194]: Finished Create User's Volatile Files and Directories.
Jan 22 13:45:27 compute-2 systemd[133194]: Listening on D-Bus User Message Bus Socket.
Jan 22 13:45:27 compute-2 systemd[133194]: Reached target Sockets.
Jan 22 13:45:27 compute-2 systemd[133194]: Reached target Basic System.
Jan 22 13:45:27 compute-2 systemd[133194]: Reached target Main User Target.
Jan 22 13:45:27 compute-2 systemd[133194]: Startup finished in 133ms.
Jan 22 13:45:27 compute-2 systemd[1]: Started User Manager for UID 0.
Jan 22 13:45:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:27.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:27 compute-2 systemd[1]: Started ovn_controller container.
Jan 22 13:45:27 compute-2 systemd[1]: Started Session c1 of User root.
Jan 22 13:45:27 compute-2 sudo[133097]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:27 compute-2 ovn_controller[133156]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:45:27 compute-2 ovn_controller[133156]: INFO:__main__:Validating config file
Jan 22 13:45:27 compute-2 ovn_controller[133156]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:45:27 compute-2 ovn_controller[133156]: INFO:__main__:Writing out command to execute
Jan 22 13:45:27 compute-2 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 22 13:45:27 compute-2 ovn_controller[133156]: ++ cat /run_command
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + ARGS=
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + sudo kolla_copy_cacerts
Jan 22 13:45:27 compute-2 systemd[1]: Started Session c2 of User root.
Jan 22 13:45:27 compute-2 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + [[ ! -n '' ]]
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + . kolla_extend_start
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 22 13:45:27 compute-2 ovn_controller[133156]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + umask 0022
Jan 22 13:45:27 compute-2 ovn_controller[133156]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
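In the exec line above, the trailing unix:/run/openvswitch/db.sock is the positional connection to the local ovsdb-server, while -p/-c/-C supply the TLS private key, certificate, and CA bundle used for the ssl: southbound connection seen below. The southbound address itself is not on the command line; ovn-controller reads it from the local Open_vSwitch table. A sketch, assuming the standard OVN configuration keys:

    # Southbound DB location behind the ssl:ovsdbserver-sb.openstack.svc:6642 dial below
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote

    # Tunnel encapsulation; geneve here would match the genev_sys_6081 device at 13:45:28
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type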
Jan 22 13:45:27 compute-2 ovn_controller[133156]: 2026-01-22T13:45:27Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 13:45:27 compute-2 ovn_controller[133156]: 2026-01-22T13:45:27Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 13:45:27 compute-2 ovn_controller[133156]: 2026-01-22T13:45:27Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 22 13:45:27 compute-2 ovn_controller[133156]: 2026-01-22T13:45:27Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0071] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0078] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <warn>  [1769089528.0081] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0087] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0093] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0096] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 13:45:28 compute-2 kernel: br-int: entered promiscuous mode
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00013|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00014|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00015|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Jan 22 13:45:28 compute-2 systemd-udevd[133285]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00017|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00018|features|INFO|OVS Feature: ct_flush, state: supported
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00019|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00021|main|INFO|OVS feature set changed, force recompute.
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0619] manager: (ovn-c803af-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0625] manager: (ovn-d9fd1e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 22 13:45:28 compute-2 kernel: genev_sys_6081: entered promiscuous mode
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0800] device (genev_sys_6081): carrier: link connected
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.0805] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Jan 22 13:45:28 compute-2 ovn_controller[133156]: 2026-01-22T13:45:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 13:45:28 compute-2 ceph-mon[77081]: pgmap v511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:28 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 514 sec, osd.2 has slow ops (SLOW_OPS)
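The SLOW_OPS health check above (oldest op blocked for 514 sec, on osd.2) matches the per-OSD get_health_metrics lines repeating through this window. A sketch of the usual drill-down, assuming cephadm-managed daemons as the ceph-088fe176-... unit names suggest:

    # Cluster-level view of the SLOW_OPS health check
    ceph health detail

    # Per-op detail from osd.2's admin socket; under cephadm this runs inside the daemon container
    cephadm shell --name osd.2 -- ceph daemon osd.2 dump_ops_in_flight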
Jan 22 13:45:28 compute-2 NetworkManager[49000]: <info>  [1769089528.5387] manager: (ovn-7335e4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 22 13:45:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:28.864+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:28.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:45:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2035 writes, 12K keys, 2035 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2035 writes, 12K keys, 2035 commit groups, 1.0 writes per commit group, ingest: 23.75 MB, 0.04 MB/s
                                           Interval WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    117.1      0.13              0.03         5    0.025       0      0       0.0       0.0
                                             L6      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3    159.2    132.6      0.26              0.08         4    0.064     18K   1811       0.0       0.0
                                            Sum      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3    106.7    127.5      0.38              0.11         9    0.042     18K   1811       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3    107.7    128.7      0.38              0.11         8    0.047     18K   1811       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    159.2    132.6      0.26              0.08         4    0.064     18K   1811       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    120.4      0.12              0.03         4    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.014
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 1.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(62,1.13 MB,0.37106%) FilterBlock(9,59.98 KB,0.0192692%) IndexBlock(9,116.08 KB,0.0372887%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
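The block above is RocksDB's periodic statistics dump from the mon's backing store; both Uptime lines show a 600 s interval, i.e. the usual ten-minute stats period. To see the rocksdb-related settings the mon is actually running with, a sketch using the mon's admin socket, with the daemon name taken from mon.compute-2 in the log:

    ceph daemon mon.compute-2 config show | grep -i rocksdb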
Jan 22 13:45:29 compute-2 python3.9[133416]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 13:45:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:29 compute-2 ceph-mon[77081]: pgmap v512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:29.869+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:30 compute-2 sudo[133520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:30 compute-2 sudo[133520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:30 compute-2 sudo[133520]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:30 compute-2 sudo[133566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:30 compute-2 sudo[133566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:30 compute-2 sudo[133566]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:30 compute-2 sudo[133617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmclxvpjgaawocgzxcrqcgptxfzmlyse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089530.0587435-1834-150249655630305/AnsiballZ_stat.py'
Jan 22 13:45:30 compute-2 sudo[133617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:30 compute-2 python3.9[133619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:30 compute-2 sudo[133617]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:30.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:30.909+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:30 compute-2 sudo[133740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhkfvbctnxnbixnblmbsdgzctbitxdlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089530.0587435-1834-150249655630305/AnsiballZ_copy.py'
Jan 22 13:45:30 compute-2 sudo[133740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:31 compute-2 python3.9[133742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089530.0587435-1834-150249655630305/.source.yaml _original_basename=.yjmkrj2h follow=False checksum=46f66c8a157c96fcb7cc69848fe925e114c66b53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:45:31 compute-2 sudo[133740]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:31 compute-2 ceph-mon[77081]: pgmap v513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:31.909+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:31 compute-2 sudo[133892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxjeguxhzyfyjnjnvoanyjsoxakwlegg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089531.6212952-1879-214220074664872/AnsiballZ_command.py'
Jan 22 13:45:31 compute-2 sudo[133892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:32 compute-2 python3.9[133894]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:45:32 compute-2 ovs-vsctl[133895]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 22 13:45:32 compute-2 sudo[133892]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:32 compute-2 sudo[134046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazwrpvdrwjmaccndlvpcytzrxukpkqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089532.4601057-1903-224693285381364/AnsiballZ_command.py'
Jan 22 13:45:32 compute-2 sudo[134046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:32.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:32.929+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:32 compute-2 python3.9[134048]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:45:32 compute-2 ovs-vsctl[134050]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 22 13:45:32 compute-2 sudo[134046]: pam_unix(sudo:session): session closed for user root
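The ERR above is expected when the probed key was never set: a plain ovs-vsctl get fails if external_ids has no ovn-cms-options entry, and the playbook simply proceeds to the remove at 13:45:34, which is idempotent. The error-free form of the probe uses --if-exists, which prints a blank line instead of failing:

    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options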
Jan 22 13:45:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:33.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:33 compute-2 ceph-mon[77081]: pgmap v514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:33 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:33.940+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:34 compute-2 sudo[134201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvwcxbxbyvzgrjvtjkwmixaguetibcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089533.7971997-1945-50199709719761/AnsiballZ_command.py'
Jan 22 13:45:34 compute-2 sudo[134201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:34 compute-2 python3.9[134203]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:45:34 compute-2 ovs-vsctl[134205]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 22 13:45:34 compute-2 sudo[134201]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:34 compute-2 ceph-mon[77081]: pgmap v515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:34 compute-2 sshd-session[121768]: Connection closed by 192.168.122.30 port 60964
Jan 22 13:45:34 compute-2 sshd-session[121765]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:45:34 compute-2 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Jan 22 13:45:34 compute-2 systemd[1]: session-45.scope: Deactivated successfully.
Jan 22 13:45:34 compute-2 systemd[1]: session-45.scope: Consumed 56.147s CPU time.
Jan 22 13:45:34 compute-2 systemd-logind[787]: Removed session 45.
Jan 22 13:45:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:34.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:34.945+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:35.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:35.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:36.947+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:37 compute-2 ceph-mon[77081]: pgmap v516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:37.986+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:38 compute-2 systemd[1]: Stopping User Manager for UID 0...
Jan 22 13:45:38 compute-2 systemd[133194]: Activating special unit Exit the Session...
Jan 22 13:45:38 compute-2 systemd[133194]: Stopped target Main User Target.
Jan 22 13:45:38 compute-2 systemd[133194]: Stopped target Basic System.
Jan 22 13:45:38 compute-2 systemd[133194]: Stopped target Paths.
Jan 22 13:45:38 compute-2 systemd[133194]: Stopped target Sockets.
Jan 22 13:45:38 compute-2 systemd[133194]: Stopped target Timers.
Jan 22 13:45:38 compute-2 systemd[133194]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 13:45:38 compute-2 systemd[133194]: Closed D-Bus User Message Bus Socket.
Jan 22 13:45:38 compute-2 systemd[133194]: Stopped Create User's Volatile Files and Directories.
Jan 22 13:45:38 compute-2 systemd[133194]: Removed slice User Application Slice.
Jan 22 13:45:38 compute-2 systemd[133194]: Reached target Shutdown.
Jan 22 13:45:38 compute-2 systemd[133194]: Finished Exit the Session.
Jan 22 13:45:38 compute-2 systemd[133194]: Reached target Exit the Session.
Jan 22 13:45:38 compute-2 systemd[1]: user@0.service: Deactivated successfully.
Jan 22 13:45:38 compute-2 systemd[1]: Stopped User Manager for UID 0.
Jan 22 13:45:38 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 22 13:45:38 compute-2 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 22 13:45:38 compute-2 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 22 13:45:38 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 22 13:45:38 compute-2 systemd[1]: Removed slice User Slice of UID 0.
Jan 22 13:45:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:38.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:38.980+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:39 compute-2 ceph-mon[77081]: pgmap v517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:39 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:39.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:39.948+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:40 compute-2 sshd-session[134233]: Accepted publickey for zuul from 192.168.122.30 port 55626 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:45:40 compute-2 systemd-logind[787]: New session 47 of user zuul.
Jan 22 13:45:40 compute-2 systemd[1]: Started Session 47 of User zuul.
Jan 22 13:45:40 compute-2 sshd-session[134233]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:45:40 compute-2 sudo[134290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:40 compute-2 sudo[134290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:40 compute-2 sudo[134290]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:40 compute-2 sudo[134315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:45:40 compute-2 sudo[134315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:40 compute-2 sudo[134315]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:40 compute-2 sudo[134361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:40 compute-2 sudo[134361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:40 compute-2 sudo[134361]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:40 compute-2 sudo[134407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 13:45:40 compute-2 sudo[134407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:40.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:40.925+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:40 compute-2 sudo[134407]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:41 compute-2 python3.9[134499]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:45:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:41.948+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:42 compute-2 sudo[134536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:42 compute-2 sudo[134536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:42 compute-2 sudo[134536]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:42 compute-2 sudo[134584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:45:42 compute-2 sudo[134584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:42 compute-2 sudo[134584]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:42 compute-2 sudo[134638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:42 compute-2 sudo[134638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:42 compute-2 sudo[134638]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:42 compute-2 sudo[134664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:45:42 compute-2 sudo[134664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:42 compute-2 sudo[134801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxlgqfadlwuahjovhbiufjscbokzobo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089542.225564-65-217949970302705/AnsiballZ_file.py'
Jan 22 13:45:42 compute-2 sudo[134801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:42 compute-2 ceph-mon[77081]: pgmap v518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 13:45:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 13:45:42 compute-2 ceph-mon[77081]: pgmap v519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 13:45:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:42 compute-2 podman[134838]: 2026-01-22 13:45:42.862267146 +0000 UTC m=+0.063774569 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 13:45:42 compute-2 python3.9[134807]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:42.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:42 compute-2 sudo[134801]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:42.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:42 compute-2 podman[134838]: 2026-01-22 13:45:42.954437103 +0000 UTC m=+0.155944506 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 13:45:43 compute-2 sudo[135097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwvbgggyoxwitkprfkzxuzslcoxksafi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089543.0311644-65-271750469159583/AnsiballZ_file.py'
Jan 22 13:45:43 compute-2 sudo[135097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:43.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:43 compute-2 python3.9[135106]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:43 compute-2 sudo[135097]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:43 compute-2 podman[135145]: 2026-01-22 13:45:43.623110091 +0000 UTC m=+0.065471555 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:45:43 compute-2 podman[135145]: 2026-01-22 13:45:43.634207118 +0000 UTC m=+0.076568562 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:45:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:43 compute-2 podman[135288]: 2026-01-22 13:45:43.846645257 +0000 UTC m=+0.054478290 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc.)
Jan 22 13:45:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:43.879+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:43 compute-2 podman[135331]: 2026-01-22 13:45:43.948500015 +0000 UTC m=+0.084140035 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, vcs-type=git, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 13:45:43 compute-2 podman[135288]: 2026-01-22 13:45:43.954432773 +0000 UTC m=+0.162265796 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, architecture=x86_64, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Jan 22 13:45:43 compute-2 sudo[135392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwqcwyhisluyzphhaddslmprwdevqav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089543.6787283-65-19700869301349/AnsiballZ_file.py'
Jan 22 13:45:43 compute-2 sudo[135392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:43 compute-2 sudo[134664]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:44 compute-2 python3.9[135395]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:44 compute-2 sudo[135392]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:44 compute-2 sudo[135396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:44 compute-2 sudo[135396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:44 compute-2 sudo[135396]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:44 compute-2 sudo[135445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:45:44 compute-2 sudo[135445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:44 compute-2 sudo[135445]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:44 compute-2 sudo[135488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:44 compute-2 sudo[135488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:44 compute-2 sudo[135488]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:44 compute-2 sudo[135540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:45:44 compute-2 sudo[135540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:44 compute-2 sudo[135660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufxzjucmcqzneufvqpxvenvstwdtgmmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089544.318939-65-135403137172239/AnsiballZ_file.py'
Jan 22 13:45:44 compute-2 sudo[135660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:44.861+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:44 compute-2 sudo[135540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:44.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:44 compute-2 python3.9[135664]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:44 compute-2 sudo[135660]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:45 compute-2 sudo[135828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seezybkaxciwzdqzybrplialximqivxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089545.0833497-65-22288992339718/AnsiballZ_file.py'
Jan 22 13:45:45 compute-2 sudo[135828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:45 compute-2 python3.9[135830]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:45 compute-2 sudo[135828]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:45.864+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:46 compute-2 ceph-mon[77081]: pgmap v520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:45:46 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:45:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:46 compute-2 python3.9[135982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:45:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:46.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:46.895+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:45:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:45:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:45:47 compute-2 ceph-mon[77081]: pgmap v521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:47.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:47 compute-2 sudo[136132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yamicltiaisqzpgcbzqcwzryvjhnhguh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089547.1618886-197-62615475506495/AnsiballZ_seboolean.py'
Jan 22 13:45:47 compute-2 sudo[136132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:47 compute-2 python3.9[136134]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 13:45:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:47.865+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:48 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:48 compute-2 sudo[136132]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:48.821+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:45:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:48.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:45:49 compute-2 ceph-mon[77081]: pgmap v522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:49 compute-2 python3.9[136285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:49.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:49.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:50 compute-2 python3.9[136406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089548.7588608-221-8133955253076/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:50 compute-2 sudo[136476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:50 compute-2 sudo[136476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:50 compute-2 sudo[136476]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:50 compute-2 sudo[136509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:50 compute-2 sudo[136509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:50 compute-2 sudo[136509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:50.835+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:50.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:51 compute-2 python3.9[136607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:51.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:51 compute-2 python3.9[136728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089550.483311-266-116379788181895/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:51 compute-2 ceph-mon[77081]: pgmap v523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:51.830+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:52 compute-2 sudo[136879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyeaxljptrwhlykltqefdohmjrveraid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089552.0672371-317-245743978337616/AnsiballZ_setup.py'
Jan 22 13:45:52 compute-2 sudo[136879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:52 compute-2 ceph-mon[77081]: pgmap v524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:45:52 compute-2 python3.9[136881]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:45:52 compute-2 sudo[136882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:45:52 compute-2 sudo[136882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:52 compute-2 sudo[136882]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:52.806+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:52 compute-2 sudo[136915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:45:52 compute-2 sudo[136915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:45:52 compute-2 sudo[136915]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:52.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:52 compute-2 sudo[136879]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:53 compute-2 sudo[137013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxbiuwtbxeckytmehvtfxkbrfucxlkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089552.0672371-317-245743978337616/AnsiballZ_dnf.py'
Jan 22 13:45:53 compute-2 sudo[137013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:53 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:53 compute-2 python3.9[137015]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:45:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:53.828+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:54 compute-2 ceph-mon[77081]: pgmap v525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:54.822+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:54.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:54 compute-2 sudo[137013]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:55.773+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:56 compute-2 sudo[137167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wggepkdxxwdowqabvqtcfrdkyurymgma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089555.387432-353-122455604183325/AnsiballZ_systemd.py'
Jan 22 13:45:56 compute-2 sudo[137167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:45:56 compute-2 python3.9[137169]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:45:56 compute-2 sudo[137167]: pam_unix(sudo:session): session closed for user root
Jan 22 13:45:56 compute-2 ceph-mon[77081]: pgmap v526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:45:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:56.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:56.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:57 compute-2 python3.9[137323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:45:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:57.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:45:57 compute-2 python3.9[137444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089556.6587877-377-7762467730545/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:45:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:57.825+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:57 compute-2 ovn_controller[133156]: 2026-01-22T13:45:57Z|00025|memory|INFO|16256 kB peak resident set size after 29.9 seconds
Jan 22 13:45:57 compute-2 ovn_controller[133156]: 2026-01-22T13:45:57Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 22 13:45:57 compute-2 podman[137445]: 2026-01-22 13:45:57.898515502 +0000 UTC m=+0.116224862 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 13:45:58 compute-2 ceph-mon[77081]: pgmap v527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:45:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:45:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:58.803+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 13:45:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:58.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 13:45:59 compute-2 python3.9[137619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:45:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:45:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 13:45:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 13:45:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:59.794+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:45:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:45:59 compute-2 python3.9[137740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089557.9259353-377-240621287087411/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:00 compute-2 ceph-mon[77081]: pgmap v528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:00.817+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 13:46:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:00.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 13:46:01 compute-2 python3.9[137891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:01.831+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:02 compute-2 python3.9[138012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089560.906417-510-276814229553118/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:02 compute-2 python3.9[138163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:02.824+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:02 compute-2 ceph-mon[77081]: pgmap v529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 13:46:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:02.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 13:46:03 compute-2 python3.9[138284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089562.3114493-510-185275370736306/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 13:46:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:03.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 13:46:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:03.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:04 compute-2 python3.9[138434]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:46:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:04.826+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:04 compute-2 ceph-mon[77081]: pgmap v530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 13:46:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:04.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 13:46:04 compute-2 sudo[138587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thozcuvfyryjeusywwtgwohhqzyyjwle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089564.6424446-623-32179262475420/AnsiballZ_file.py'
Jan 22 13:46:04 compute-2 sudo[138587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:05 compute-2 python3.9[138589]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:05 compute-2 sudo[138587]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 13:46:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 13:46:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:05.806+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:06 compute-2 sudo[138739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xznfmurvmpnzvgxnrteceukbdkrieiai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089565.7356887-648-175949438672466/AnsiballZ_stat.py'
Jan 22 13:46:06 compute-2 sudo[138739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:06 compute-2 python3.9[138741]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:06 compute-2 sudo[138739]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:06 compute-2 sudo[138818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nawhsbbufgafpfabrmcyovhzxepjtsal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089565.7356887-648-175949438672466/AnsiballZ_file.py'
Jan 22 13:46:06 compute-2 sudo[138818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:06.773+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:06 compute-2 python3.9[138820]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:06 compute-2 sudo[138818]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:06 compute-2 ceph-mon[77081]: pgmap v531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:06.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:07 compute-2 sudo[138970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntinquxlhdzowcwuebsuwccjzwjbhtxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089567.083635-648-33726069640551/AnsiballZ_stat.py'
Jan 22 13:46:07 compute-2 sudo[138970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:07.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:07 compute-2 python3.9[138972]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:07 compute-2 sudo[138970]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:07.797+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:07 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:07 compute-2 sudo[139048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imrkaisttzhlnbpqujronzsqetwbpexf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089567.083635-648-33726069640551/AnsiballZ_file.py'
Jan 22 13:46:07 compute-2 sudo[139048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:08 compute-2 python3.9[139050]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:08 compute-2 sudo[139048]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:08 compute-2 sudo[139201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riihbqmniwqugynlhvuwejdzlsmxqobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089568.466944-716-171213448682852/AnsiballZ_file.py'
Jan 22 13:46:08 compute-2 sudo[139201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:08.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 13:46:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:08.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 13:46:08 compute-2 ceph-mon[77081]: pgmap v532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:09 compute-2 python3.9[139203]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:09 compute-2 sudo[139201]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:09.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:09 compute-2 sudo[139353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfldkzdpfobepwkkbnswzroiygxyfdpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089569.3673115-741-11153417502491/AnsiballZ_stat.py'
Jan 22 13:46:09 compute-2 sudo[139353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:09.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:09 compute-2 python3.9[139355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:10 compute-2 sudo[139353]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:10 compute-2 sudo[139431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xigzbuswkfljpxbycilnefwjsxxlvwco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089569.3673115-741-11153417502491/AnsiballZ_file.py'
Jan 22 13:46:10 compute-2 sudo[139431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:10 compute-2 python3.9[139434]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:10 compute-2 sudo[139431]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:10 compute-2 sudo[139459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:10 compute-2 sudo[139459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:10.768+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:10 compute-2 sudo[139459]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:10 compute-2 sudo[139484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:10 compute-2 sudo[139484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:10 compute-2 sudo[139484]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:10.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:11 compute-2 ceph-mon[77081]: pgmap v533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:11 compute-2 sudo[139634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqdjasscwskjdwyqsmrnjnhpaaxfbvoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089570.8629549-777-197942759782528/AnsiballZ_stat.py'
Jan 22 13:46:11 compute-2 sudo[139634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:11 compute-2 python3.9[139636]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:11.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:11 compute-2 sudo[139634]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:11.803+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:11 compute-2 sudo[139712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qujthcncwnyrqltrqcctonpnhscomudj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089570.8629549-777-197942759782528/AnsiballZ_file.py'
Jan 22 13:46:11 compute-2 sudo[139712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:11 compute-2 python3.9[139714]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:12 compute-2 sudo[139712]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:12 compute-2 sudo[139865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmtpfmyrdyjogplrapokkqomashhtecm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089572.295052-812-230895823885498/AnsiballZ_systemd.py'
Jan 22 13:46:12 compute-2 sudo[139865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:12.812+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:12.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:13 compute-2 python3.9[139867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:46:13 compute-2 systemd[1]: Reloading.
Jan 22 13:46:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:13 compute-2 systemd-sysv-generator[139899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:13 compute-2 systemd-rc-local-generator[139895]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:13 compute-2 sudo[139865]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:13.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:13.859+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:13 compute-2 ceph-mon[77081]: pgmap v534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:13 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:14 compute-2 sudo[140055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfwhhrosukcxydlayrlkoebbwaxvlutz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089573.7586336-836-158823843672027/AnsiballZ_stat.py'
Jan 22 13:46:14 compute-2 sudo[140055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:14 compute-2 python3.9[140057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:14 compute-2 sudo[140055]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:14 compute-2 sudo[140134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonfvsgdnjpiqgydxjpqwrhoifguqdjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089573.7586336-836-158823843672027/AnsiballZ_file.py'
Jan 22 13:46:14 compute-2 sudo[140134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:14.828+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:14 compute-2 python3.9[140136]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:14.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:14 compute-2 sudo[140134]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:15 compute-2 ceph-mon[77081]: pgmap v535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:15.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:15 compute-2 sudo[140286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzqpnktmzddkrefrjwgywtymdgdcvbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089575.1734219-872-199648648259887/AnsiballZ_stat.py'
Jan 22 13:46:15 compute-2 sudo[140286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:15 compute-2 python3.9[140288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:15 compute-2 sudo[140286]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:15.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:15 compute-2 sudo[140364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptvsmlfvrqprihhgiomsyqrjflkavzvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089575.1734219-872-199648648259887/AnsiballZ_file.py'
Jan 22 13:46:15 compute-2 sudo[140364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:16 compute-2 python3.9[140366]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:16 compute-2 sudo[140364]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:16.792+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:16.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:17 compute-2 sudo[140517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aszdzlxgzqxsdtfobkgdicgcbgmfijdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089576.5520627-908-38668770896658/AnsiballZ_systemd.py'
Jan 22 13:46:17 compute-2 sudo[140517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:17 compute-2 python3.9[140519]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:46:17 compute-2 systemd[1]: Reloading.
Jan 22 13:46:17 compute-2 systemd-rc-local-generator[140545]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:17 compute-2 systemd-sysv-generator[140549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:17.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:17 compute-2 systemd[1]: Starting Create netns directory...
Jan 22 13:46:17 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 13:46:17 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 13:46:17 compute-2 systemd[1]: Finished Create netns directory.
Jan 22 13:46:17 compute-2 sudo[140517]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:17.772+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:18 compute-2 ceph-mon[77081]: pgmap v536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:18 compute-2 sudo[140711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amuofwzpzefdenjhdvimhicbnpivaetd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089578.0926216-939-251523589830640/AnsiballZ_file.py'
Jan 22 13:46:18 compute-2 sudo[140711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:18 compute-2 python3.9[140713]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:18 compute-2 sudo[140711]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:18.817+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:18.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:19 compute-2 ceph-mon[77081]: pgmap v537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:19 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:19 compute-2 sudo[140863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdfvdbvadcprohairkctdcnpfepemnyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089578.8787742-963-52340102198638/AnsiballZ_stat.py'
Jan 22 13:46:19 compute-2 sudo[140863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:19 compute-2 python3.9[140865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:19 compute-2 sudo[140863]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:46:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:19.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:46:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:19.816+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:19 compute-2 sudo[140986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnecqjpfjsjzmcrdzgzamfhdibpvlse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089578.8787742-963-52340102198638/AnsiballZ_copy.py'
Jan 22 13:46:19 compute-2 sudo[140986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:20 compute-2 python3.9[140988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089578.8787742-963-52340102198638/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:20 compute-2 sudo[140986]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:20.787+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:20 compute-2 sudo[141139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exeetwcquhxluuiqkegjxicbwdjxuyqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089580.6039946-1013-152752923923162/AnsiballZ_file.py'
Jan 22 13:46:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:20.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:20 compute-2 sudo[141139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:21 compute-2 ceph-mon[77081]: pgmap v538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:21 compute-2 python3.9[141141]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:21 compute-2 sudo[141139]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:21.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:21 compute-2 sudo[141291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vftltjfjvlnrylwtqccgcivfrgbpqadv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089581.4220066-1038-237970681887568/AnsiballZ_file.py'
Jan 22 13:46:21 compute-2 sudo[141291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:21.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:21 compute-2 python3.9[141293]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:46:21 compute-2 sudo[141291]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:22 compute-2 sudo[141444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxzewozoqofpmqerdchpadvzyimfnzkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089582.1962845-1061-61746672393403/AnsiballZ_stat.py'
Jan 22 13:46:22 compute-2 sudo[141444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:22 compute-2 python3.9[141446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:22 compute-2 sudo[141444]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:22.763+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:22.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:23 compute-2 sudo[141567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypnyiuzmdoigsunyqpjsqeeononiatxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089582.1962845-1061-61746672393403/AnsiballZ_copy.py'
Jan 22 13:46:23 compute-2 sudo[141567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:23 compute-2 ceph-mon[77081]: pgmap v539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:23 compute-2 python3.9[141569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089582.1962845-1061-61746672393403/.source.json _original_basename=.4ru_1mkh follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:23 compute-2 sudo[141567]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:23.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:23.782+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:24 compute-2 python3.9[141719]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:24.817+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:24.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:25 compute-2 ceph-mon[77081]: pgmap v540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:25.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:25.859+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:26 compute-2 sudo[142142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxwcqaljuotqulysebxjzwnpdlklpvhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089585.8490696-1181-138111450301230/AnsiballZ_container_config_data.py'
Jan 22 13:46:26 compute-2 sudo[142142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:26 compute-2 python3.9[142144]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 22 13:46:26 compute-2 sudo[142142]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:26.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:26.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:27 compute-2 ceph-mon[77081]: pgmap v541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:46:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:46:27 compute-2 sudo[142294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwolqqpetfteggbhrldnwihdydgqyos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089587.0592253-1214-203616079138598/AnsiballZ_container_config_hash.py'
Jan 22 13:46:27 compute-2 sudo[142294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:27 compute-2 python3.9[142296]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:46:27 compute-2 sudo[142294]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:27.804+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:28 compute-2 podman[142321]: 2026-01-22 13:46:28.07267083 +0000 UTC m=+0.120251118 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 22 13:46:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:28 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:28 compute-2 sudo[142473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnyjkvnxeyajyzqctcppqzgwzboejdf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089588.2340598-1244-59489764972102/AnsiballZ_edpm_container_manage.py'
Jan 22 13:46:28 compute-2 sudo[142473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:28.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:28.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:29 compute-2 python3[142475]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:46:29 compute-2 ceph-mon[77081]: pgmap v542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:29.772+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:30.754+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:30.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:30 compute-2 sudo[142540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:30 compute-2 sudo[142540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:30 compute-2 sudo[142540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:31 compute-2 sudo[142565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:31 compute-2 sudo[142565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:31 compute-2 sudo[142565]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:46:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Cumulative writes: 4785 writes, 21K keys, 4785 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4785 writes, 607 syncs, 7.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4785 writes, 21K keys, 4785 commit groups, 1.0 writes per commit group, ingest: 18.18 MB, 0.03 MB/s
                                           Interval WAL: 4785 writes, 607 syncs, 7.88 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 22 13:46:31 compute-2 ceph-mon[77081]: pgmap v543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:46:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:46:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:31.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:32.723+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:32.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:33 compute-2 ceph-mon[77081]: pgmap v544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:33 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:33.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:34.807+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:35 compute-2 ceph-mon[77081]: pgmap v545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:35.804+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:36.771+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:36.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:37.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:37 compute-2 podman[142489]: 2026-01-22 13:46:37.669903157 +0000 UTC m=+8.531817029 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 13:46:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:37.786+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:37 compute-2 podman[142672]: 2026-01-22 13:46:37.834539254 +0000 UTC m=+0.049362003 container create 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 13:46:37 compute-2 podman[142672]: 2026-01-22 13:46:37.810017622 +0000 UTC m=+0.024840381 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 13:46:37 compute-2 python3[142475]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 13:46:37 compute-2 sudo[142473]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:38 compute-2 ceph-mon[77081]: pgmap v546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:38.739+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:38.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:39 compute-2 ceph-mon[77081]: pgmap v547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:39 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:39.716+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:39 compute-2 sudo[142861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmioxtkgmynqfkeuidkdbfmfrrgjbzfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089599.536197-1268-137837132214948/AnsiballZ_stat.py'
Jan 22 13:46:39 compute-2 sudo[142861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:40 compute-2 python3.9[142863]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:46:40 compute-2 sudo[142861]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:40.765+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:40 compute-2 sudo[143016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgvwglgrkapqktlduzqafuegehvywzlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089600.5036755-1295-137407881524509/AnsiballZ_file.py'
Jan 22 13:46:40 compute-2 sudo[143016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:40.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:41 compute-2 python3.9[143018]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:41 compute-2 sudo[143016]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:41 compute-2 sudo[143092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahyrtrybxylvctjbwaqhqyunzpdzgjat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089600.5036755-1295-137407881524509/AnsiballZ_stat.py'
Jan 22 13:46:41 compute-2 sudo[143092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:41 compute-2 python3.9[143094]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:46:41 compute-2 sudo[143092]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:46:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:46:41 compute-2 ceph-mon[77081]: pgmap v548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:41.718+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:42 compute-2 sudo[143243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijvwasvckxwzeprujhuugbklvybpedvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089601.5686336-1295-31270107419934/AnsiballZ_copy.py'
Jan 22 13:46:42 compute-2 sudo[143243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:42 compute-2 python3.9[143245]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089601.5686336-1295-31270107419934/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:42 compute-2 sudo[143243]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:42 compute-2 sudo[143320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvdllutabdrfhiefzwwhnwlcnhgzkukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089601.5686336-1295-31270107419934/AnsiballZ_systemd.py'
Jan 22 13:46:42 compute-2 sudo[143320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:42 compute-2 ceph-mon[77081]: pgmap v549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:42.755+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:42 compute-2 python3.9[143322]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:46:42 compute-2 systemd[1]: Reloading.
Jan 22 13:46:42 compute-2 systemd-rc-local-generator[143352]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:42.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:42 compute-2 systemd-sysv-generator[143357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:43.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:43 compute-2 sudo[143320]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:44 compute-2 sudo[143432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdtxxttbesedbhlyhrryaqtgxpzkblqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089601.5686336-1295-31270107419934/AnsiballZ_systemd.py'
Jan 22 13:46:44 compute-2 sudo[143432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:44 compute-2 python3.9[143435]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:46:44 compute-2 systemd[1]: Reloading.
Jan 22 13:46:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:44.765+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:44 compute-2 systemd-rc-local-generator[143464]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:44 compute-2 systemd-sysv-generator[143468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:46:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:46:45 compute-2 systemd[1]: Starting ovn_metadata_agent container...
Jan 22 13:46:45 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:46:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b9657b1dcd91b4246a3241bc74c99303fc9f2fa9d335018691a9ddb1987399/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 22 13:46:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b9657b1dcd91b4246a3241bc74c99303fc9f2fa9d335018691a9ddb1987399/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 13:46:45 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d.
Jan 22 13:46:45 compute-2 podman[143476]: 2026-01-22 13:46:45.225844315 +0000 UTC m=+0.154364906 container init 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + sudo -E kolla_set_configs
Jan 22 13:46:45 compute-2 podman[143476]: 2026-01-22 13:46:45.256098949 +0000 UTC m=+0.184619510 container start 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 13:46:45 compute-2 edpm-start-podman-container[143476]: ovn_metadata_agent
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Validating config file
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Copying service configuration files
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Writing out command to execute
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: ++ cat /run_command
Jan 22 13:46:45 compute-2 edpm-start-podman-container[143475]: Creating additional drop-in dependency for "ovn_metadata_agent" (65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d)
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + CMD=neutron-ovn-metadata-agent
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + ARGS=
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + sudo kolla_copy_cacerts
Jan 22 13:46:45 compute-2 podman[143499]: 2026-01-22 13:46:45.341273784 +0000 UTC m=+0.069334614 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 13:46:45 compute-2 systemd[1]: Reloading.
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + [[ ! -n '' ]]
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + . kolla_extend_start
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: Running command: 'neutron-ovn-metadata-agent'
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + umask 0022
Jan 22 13:46:45 compute-2 ovn_metadata_agent[143492]: + exec neutron-ovn-metadata-agent
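The `+`-prefixed ovn_metadata_agent lines above are `set -x` shell tracing from the kolla container entrypoint. A minimal Python rendering of the traced steps, assuming only what the trace itself shows (the /run_command file and the kolla_copy_cacerts helper); the real entrypoint is a shell script baked into the image and may differ:

    import os
    import subprocess

    # Read the command baked into the image at /run_command (seen via `cat` above).
    cmd = open("/run_command").read().strip()   # -> "neutron-ovn-metadata-agent"
    args = ""                                   # ARGS= in the trace

    # Copy CA certificates into the container trust store (helper seen in the trace).
    subprocess.run(["sudo", "kolla_copy_cacerts"], check=True)

    # `[[ ! -n '' ]]` is true in the trace, so the service-specific
    # kolla_extend_start hook is sourced next; it has no Python equivalent
    # here and is elided.
    print(f"Running command: '{cmd}'")
    os.umask(0o022)
    os.execvp(cmd, [cmd] + args.split())        # exec replaces the process, as in the trace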
Jan 22 13:46:45 compute-2 systemd-rc-local-generator[143570]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:45 compute-2 systemd-sysv-generator[143573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:45.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:45 compute-2 ceph-mon[77081]: pgmap v550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:45 compute-2 systemd[1]: Started ovn_metadata_agent container.
Jan 22 13:46:45 compute-2 sudo[143432]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:45.738+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:46.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:46 compute-2 python3.9[143732]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 13:46:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:46.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.094 143497 INFO neutron.common.config [-] Logging enabled!
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.094 143497 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.094 143497 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
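Everything from the row of asterisks below through the long option listing is oslo.config dumping every registered option via log_opt_values(), the method named at the end of each line (cfg.py:2602/2609). A minimal sketch of producing such a dump with oslo.config; the two options registered here are illustrative stand-ins for neutron's full registry:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    # Illustrative options; the real agent registers hundreds (see the dump below).
    CONF.register_opts([
        cfg.IntOpt("metadata_workers", default=1),
        # secret=True is why the dump prints '****' for values such as
        # metadata_proxy_shared_secret and transport_url.
        cfg.StrOpt("metadata_proxy_shared_secret", secret=True, default=""),
    ])

    CONF([], project="demo")                  # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # one DEBUG line per option, plus banners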
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.095 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.095 143497 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.095 143497 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.150 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.162 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c4fa18b6-ed0f-47ac-8eec-d1399749aa8e (UUID: c4fa18b6-ed0f-47ac-8eec-d1399749aa8e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.191 143497 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.192 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.192 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.192 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.197 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.202 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.208 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c4fa18b6-ed0f-47ac-8eec-d1399749aa8e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], external_ids={}, name=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, nb_cfg_timestamp=1769089536027, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.210 143497 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff0fc0dcf70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 INFO oslo_service.service [-] Starting 1 workers
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.215 143497 DEBUG oslo_service.service [-] Started child 143757 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.219 143497 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp405dvk24/privsep.sock']
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.219 143757 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-230623'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.242 143757 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.242 143757 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.243 143757 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.246 143757 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.251 143757 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.257 143757 INFO eventlet.wsgi.server [-] (143757) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 22 13:46:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:46:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:46:47 compute-2 ceph-mon[77081]: pgmap v551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:47.766+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:47 compute-2 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 22 13:46:47 compute-2 sudo[143888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnhbkqftrlkfbkarxgzpwtggsfmjqzag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089607.5637686-1431-46749925088289/AnsiballZ_stat.py'
Jan 22 13:46:47 compute-2 sudo[143888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.895 143497 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.896 143497 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp405dvk24/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.774 143856 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.778 143856 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.780 143856 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.780 143856 INFO oslo.privsep.daemon [-] privsep daemon running as pid 143856
Jan 22 13:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.898 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[95d2790d-eaff-43ee-b037-c52c2acd3d99]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 13:46:48 compute-2 python3.9[143890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:46:48 compute-2 sudo[143888]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:48 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:48.469 143856 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:46:48 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:48.469 143856 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:46:48 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:48.470 143856 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:46:48 compute-2 sudo[144018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjympkuvccfdvzdqfrqitglilwhsanl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089607.5637686-1431-46749925088289/AnsiballZ_copy.py'
Jan 22 13:46:48 compute-2 sudo[144018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:48 compute-2 ceph-mon[77081]: pgmap v552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:48 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:48 compute-2 python3.9[144020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089607.5637686-1431-46749925088289/.source.yaml _original_basename=.xq1exs8a follow=False checksum=a7c93daf1344287e5303b3d1648c714a9349cb4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:46:48 compute-2 sudo[144018]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:48.788+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:48.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.180 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8b9dbb-995c-41e0-adbb-ea73c107a937]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.183 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, column=external_ids, values=({'neutron:ovn-metadata-id': '8451296e-09c6-52d3-9638-e3d9fe7a5f53'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.194 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.200 143497 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:46:49 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.243 143497 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
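[Editor's note] The block ending here is the standard oslo.config startup dump: with debug logging enabled, the agent calls ConfigOpts.log_opt_values(), which emits one "option = value" line per registered option (grouped as group.option), masks any option registered as secret with **** (hence transport_url above), and closes with the row of asterisks on the last line. A minimal sketch of that mechanism, assuming only the Python standard library plus oslo.config; the two sample options are copied from the dump above, whereas a real agent registers hundreds:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("oslo_service.service")

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        # Values mirror the dump above; defaults here stand in for whatever
        # the deployment's config files actually set.
        cfg.IntOpt("metadata_workers", default=1),
        cfg.StrOpt("nova_metadata_protocol", default="https"),
    ])
    CONF([])  # parse an empty command line so the options become readable
    # Emits "option = value" per option at DEBUG, framed by asterisk banners,
    # exactly the shape of the log lines above.
    CONF.log_opt_values(LOG, logging.DEBUG)

One detail worth decoding from the dump: the privsep.capabilities lists are numeric Linux capability constants, so [21, 12, 1, 2, 19] is CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE.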
Jan 22 13:46:49 compute-2 sshd-session[134236]: Connection closed by 192.168.122.30 port 55626
Jan 22 13:46:49 compute-2 sshd-session[134233]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:46:49 compute-2 systemd[1]: session-47.scope: Deactivated successfully.
Jan 22 13:46:49 compute-2 systemd[1]: session-47.scope: Consumed 57.988s CPU time.
Jan 22 13:46:49 compute-2 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Jan 22 13:46:49 compute-2 systemd-logind[787]: Removed session 47.
Jan 22 13:46:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:49.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
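[Editor's note] The radosgw "beast" triples here and below repeat every one to two seconds: an anonymous "HEAD / HTTP/1.0" from 192.168.122.100 and .102, answered 200 with near-zero latency. That cadence and shape is characteristic of external load-balancer health probes rather than client traffic. A rough stand-in with the standard library, hedged: the hostname and port are assumptions (the log never shows the beast frontend's bind address), and http.client speaks HTTP/1.1 rather than the probe's HTTP/1.0:

    import http.client

    # Probe the RGW frontend the way the balancer in the log appears to:
    # an anonymous HEAD on "/" and nothing else. Port 8080 is a guess.
    conn = http.client.HTTPConnection("compute-2", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # the log records 200 with ~0 s latency
    conn.close()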
Jan 22 13:46:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:49.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
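[Editor's note] Each repetition of this three-line cluster (container stdout, the journald copy under ceph-osd, and the cluster log WRN) is osd.2's periodic health tick re-reporting the same two stuck operations: the oldest is an omap-get-vals read of the rbd_mirror_snapshot_schedule object in PG 2.12. The mon's SLOW_OPS health update a few seconds below says the oldest op has been blocked for 604 s, which dates the start of the stall to roughly 13:36:48. A hedged triage sketch that asks the OSD's admin socket what is actually in flight; "ceph daemon ... dump_ops_in_flight" is a standard admin-socket command, and osd.2 is taken from the log:

    import json
    import subprocess

    # Dump the ops currently in flight on osd.2 and print each op's age and
    # description; slow ops appear here with the same osd_op(...) text as in
    # the log lines above.
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.2", "dump_ops_in_flight"], text=True
    )
    for op in json.loads(out).get("ops", []):
        print(f"{op.get('age', '?')}s  {op.get('description')}")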
Jan 22 13:46:50 compute-2 ceph-mon[77081]: pgmap v553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:50.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:50.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:51 compute-2 sudo[144046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:51 compute-2 sudo[144046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:51 compute-2 sudo[144046]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:51 compute-2 sudo[144071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:51 compute-2 sudo[144071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:51 compute-2 sudo[144071]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:46:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:51.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:46:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:51.815+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:52.807+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:52 compute-2 sudo[144097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:52 compute-2 sudo[144097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:52 compute-2 sudo[144097]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:52 compute-2 sudo[144122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:46:52 compute-2 sudo[144122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:52 compute-2 sudo[144122]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:52.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:52 compute-2 ceph-mon[77081]: pgmap v554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:52 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:46:53 compute-2 sudo[144147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:46:53 compute-2 sudo[144147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:53 compute-2 sudo[144147]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:53 compute-2 sudo[144172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:46:53 compute-2 sudo[144172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:46:53 compute-2 sudo[144172]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:46:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:53.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:46:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:53.782+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:46:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:46:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:46:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:46:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:46:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:46:54 compute-2 sshd-session[144229]: Accepted publickey for zuul from 192.168.122.30 port 34248 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:46:54 compute-2 systemd-logind[787]: New session 48 of user zuul.
Jan 22 13:46:54 compute-2 systemd[1]: Started Session 48 of User zuul.
Jan 22 13:46:54 compute-2 sshd-session[144229]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:46:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:54.794+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:46:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:46:55 compute-2 ceph-mon[77081]: pgmap v555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:55.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:55 compute-2 python3.9[144382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:46:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:55.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:56 compute-2 sudo[144537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkfjixvzkkxbasryfulptjhwcmnzioiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089616.220284-64-57183847027734/AnsiballZ_command.py'
Jan 22 13:46:56 compute-2 sudo[144537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:46:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 22 13:46:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 22 13:46:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:56.864+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:56 compute-2 python3.9[144539]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:46:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 13:46:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:56.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:56 compute-2 sudo[144537]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 13:46:56 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 22 13:46:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 22 13:46:57 compute-2 ceph-mon[77081]: pgmap v556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 22 13:46:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:46:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:46:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:57.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:58 compute-2 sudo[144702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlpjyhsjldovrbnqwlujnqgkyvlotaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089617.451963-97-13388322630854/AnsiballZ_systemd_service.py'
Jan 22 13:46:58 compute-2 sudo[144702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:46:58 compute-2 python3.9[144704]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:46:58 compute-2 systemd[1]: Reloading.
Jan 22 13:46:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:58 compute-2 systemd-sysv-generator[144756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:46:58 compute-2 systemd-rc-local-generator[144753]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:46:58 compute-2 podman[144707]: 2026-01-22 13:46:58.538352908 +0000 UTC m=+0.131344026 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 13:46:58 compute-2 sudo[144702]: pam_unix(sudo:session): session closed for user root
Jan 22 13:46:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:58.873+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:46:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:46:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:58.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:46:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:46:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:46:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:46:59 compute-2 python3.9[144917]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:46:59 compute-2 network[144934]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:46:59 compute-2 network[144935]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:46:59 compute-2 network[144936]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:46:59 compute-2 ceph-mon[77081]: pgmap v557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:46:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:46:59 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:00.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:00.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:01 compute-2 sudo[144980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:01 compute-2 sudo[144980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:01 compute-2 sudo[144980]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:01 compute-2 sudo[145009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:47:01 compute-2 sudo[145009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:01 compute-2 sudo[145009]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000058s ======
Jan 22 13:47:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:01.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Jan 22 13:47:01 compute-2 ceph-mon[77081]: pgmap v558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 13:47:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:47:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:47:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:01.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:02 compute-2 sshd-session[145062]: Invalid user sol from 92.118.39.95 port 52562
Jan 22 13:47:02 compute-2 sshd-session[145062]: Connection closed by invalid user sol 92.118.39.95 port 52562 [preauth]
Jan 22 13:47:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:02.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:02.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:03 compute-2 ceph-mon[77081]: pgmap v559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 22 13:47:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:03.732+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:04.710+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:04.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:05 compute-2 ceph-mon[77081]: pgmap v560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 13:47:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:05.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:05.684+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:06 compute-2 sudo[145252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umkpugrhvvmwphgnwqboiitpbonquxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089626.296576-154-155122270769491/AnsiballZ_systemd_service.py'
Jan 22 13:47:06 compute-2 sudo[145252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:06.641+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:06 compute-2 python3.9[145254]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:06 compute-2 sudo[145252]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:06.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:07 compute-2 ceph-mon[77081]: pgmap v561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 13:47:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:07.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:07.665+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:07 compute-2 sudo[145405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvvsneizktfjjwoovcgxrbmttvazwwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089627.4607668-154-163463318751028/AnsiballZ_systemd_service.py'
Jan 22 13:47:07 compute-2 sudo[145405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:08 compute-2 python3.9[145407]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:08 compute-2 sudo[145405]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:08 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:08 compute-2 sudo[145559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uanhqstaljivmbbnxbeeqhlrqeqtvvzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089628.2830546-154-279971670093877/AnsiballZ_systemd_service.py'
Jan 22 13:47:08 compute-2 sudo[145559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:08.686+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:08 compute-2 python3.9[145561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:08 compute-2 sudo[145559]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:08.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:09 compute-2 sudo[145712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmewfuztubntgtijtmhjwdaxsabrmpyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089629.1131585-154-278444171779044/AnsiballZ_systemd_service.py'
Jan 22 13:47:09 compute-2 sudo[145712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:09.676+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:10.718+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:10.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:11 compute-2 sudo[145717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:11 compute-2 sudo[145717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:11 compute-2 sudo[145717]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:11 compute-2 sudo[145742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:11 compute-2 sudo[145742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:11 compute-2 sudo[145742]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:11 compute-2 python3.9[145714]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:11.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:11 compute-2 sudo[145712]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:11.760+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:11 compute-2 ceph-mon[77081]: pgmap v562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 13:47:12 compute-2 sudo[145917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpavjwtkrnwzvepbjaosmjoknxlupzfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089631.7456481-154-241329796070084/AnsiballZ_systemd_service.py'
Jan 22 13:47:12 compute-2 sudo[145917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:12 compute-2 python3.9[145919]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:12 compute-2 sudo[145917]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:12.740+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-2 ceph-mon[77081]: pgmap v563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 13:47:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:12 compute-2 sudo[146071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eafeqvdtvslqobdwbfdexusrxhidqctx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089632.5355265-154-164773771919167/AnsiballZ_systemd_service.py'
Jan 22 13:47:12 compute-2 sudo[146071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:12.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:13 compute-2 python3.9[146073]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:13 compute-2 sudo[146071]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:13 compute-2 sudo[146224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnsmyxvahpuxljsjnpuyvurfvixgxtnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089633.2838938-154-60497368990904/AnsiballZ_systemd_service.py'
Jan 22 13:47:13 compute-2 sudo[146224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:13.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:13.715+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:13 compute-2 python3.9[146226]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:47:13 compute-2 sudo[146224]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:13 compute-2 ceph-mon[77081]: pgmap v564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 22 13:47:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:13 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 619 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:14 compute-2 sshd-session[146252]: Invalid user sol from 45.148.10.240 port 56582
Jan 22 13:47:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:14.668+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:14 compute-2 sshd-session[146252]: Connection closed by invalid user sol 45.148.10.240 port 56582 [preauth]
Jan 22 13:47:14 compute-2 sudo[146380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxeurobvyhcnjuvqjgblqpjnzydkwvyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089634.3501878-310-182441194375285/AnsiballZ_file.py'
Jan 22 13:47:14 compute-2 sudo[146380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:15 compute-2 python3.9[146382]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:15 compute-2 sudo[146380]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:15 compute-2 sudo[146540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilswbhjrcfkjkghapjzoqluxvwidyiwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089635.1743271-310-213507962019735/AnsiballZ_file.py'
Jan 22 13:47:15 compute-2 sudo[146540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:15 compute-2 podman[146506]: 2026-01-22 13:47:15.526253571 +0000 UTC m=+0.060781452 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:47:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:15.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:15.657+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:15 compute-2 python3.9[146548]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:15 compute-2 ceph-mon[77081]: pgmap v565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Jan 22 13:47:15 compute-2 sudo[146540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:16 compute-2 sudo[146703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsupdregsjbveofaxkkszbrwdcvmolki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089635.8310287-310-27704209190908/AnsiballZ_file.py'
Jan 22 13:47:16 compute-2 sudo[146703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:16 compute-2 python3.9[146705]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:16 compute-2 sudo[146703]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:16.664+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:16 compute-2 ceph-mon[77081]: pgmap v566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 22 13:47:16 compute-2 sudo[146856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irqpsnatwcceakyhqemebgfgxvajegnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089636.497235-310-24349240832959/AnsiballZ_file.py'
Jan 22 13:47:16 compute-2 sudo[146856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:16 compute-2 python3.9[146858]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:16 compute-2 sudo[146856]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:16.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:17 compute-2 sudo[147008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mposjzuwbjcwyghvwqjhxwqinjxewkyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089637.1063879-310-108802735560559/AnsiballZ_file.py'
Jan 22 13:47:17 compute-2 sudo[147008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:17 compute-2 python3.9[147010]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:17 compute-2 sudo[147008]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:17.673+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:18 compute-2 sudo[147160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isficskrvqubcvofnolfclshmcdtlmlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089637.7453136-310-226504311487849/AnsiballZ_file.py'
Jan 22 13:47:18 compute-2 sudo[147160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:18 compute-2 python3.9[147162]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:18 compute-2 sudo[147160]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:18.702+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:18.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:19 compute-2 sudo[147313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhgkaebpkkgmupimsgxjllhwvdidzsyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089638.399655-310-268088250397355/AnsiballZ_file.py'
Jan 22 13:47:19 compute-2 sudo[147313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:19.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:19.729+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:19 compute-2 python3.9[147315]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:19 compute-2 sudo[147313]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:19 compute-2 ceph-mon[77081]: pgmap v567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 22 13:47:19 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:20.738+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:20 compute-2 sudo[147466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnakxhdedezqltmorgvgagnseqaagnso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089640.633974-461-185416903539509/AnsiballZ_file.py'
Jan 22 13:47:20 compute-2 sudo[147466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:20.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:21 compute-2 python3.9[147468]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:21 compute-2 sudo[147466]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:21.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:21.696+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:21 compute-2 sudo[147618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfrzqcewcpjcoyorderbqwjhtkkefzaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089641.4986684-461-147446684762157/AnsiballZ_file.py'
Jan 22 13:47:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:21 compute-2 ceph-mon[77081]: pgmap v568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:21 compute-2 sudo[147618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:22 compute-2 python3.9[147620]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:22 compute-2 sudo[147618]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:22 compute-2 sudo[147771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abirxylowywilgqjpysrwkfkxebjygsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089642.2552516-461-200460848738368/AnsiballZ_file.py'
Jan 22 13:47:22 compute-2 sudo[147771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:22.662+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:22 compute-2 python3.9[147773]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:22 compute-2 sudo[147771]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:22.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:23 compute-2 sudo[147923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neqjuxojhielfroqdpytwndmnhlhwttp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089642.9278793-461-37557091425623/AnsiballZ_file.py'
Jan 22 13:47:23 compute-2 sudo[147923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:23 compute-2 ceph-mon[77081]: pgmap v569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:23 compute-2 python3.9[147925]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:23 compute-2 sudo[147923]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:23.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:23.623+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:23 compute-2 sudo[148075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhpqxaxcwmmqwyzcdgexpccxtsmtydwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089643.6729517-461-154606178697057/AnsiballZ_file.py'
Jan 22 13:47:23 compute-2 sudo[148075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:24 compute-2 python3.9[148077]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:24 compute-2 sudo[148075]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:24.639+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:24 compute-2 sudo[148228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmrmyqgcfgumipqjhjgriccvgktrgzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089644.3887906-461-159817354882232/AnsiballZ_file.py'
Jan 22 13:47:24 compute-2 sudo[148228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:25.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:25 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 634 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:25 compute-2 python3.9[148230]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:25 compute-2 sudo[148228]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:25.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:25 compute-2 sudo[148380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kixhzrdpxijilfmewrwuubpavgubpgqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089645.3321831-461-89895235061949/AnsiballZ_file.py'
Jan 22 13:47:25 compute-2 sudo[148380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:25.644+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:25 compute-2 python3.9[148382]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:47:25 compute-2 sudo[148380]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:26 compute-2 ceph-mon[77081]: pgmap v570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:26 compute-2 sudo[148533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsfwggyphvlkxyqxongytrmyzibijdjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089646.17975-614-118039848947715/AnsiballZ_command.py'
Jan 22 13:47:26 compute-2 sudo[148533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:26.663+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:26 compute-2 python3.9[148535]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:26 compute-2 sudo[148533]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:27.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:27.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:27.673+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:28 compute-2 python3.9[148687]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:47:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:28.642+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:28 compute-2 ceph-mon[77081]: pgmap v571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:29.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:29 compute-2 sudo[148863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbcajscliwibrtcoahtisfuikavjejdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089648.7351818-668-125342398268349/AnsiballZ_systemd_service.py'
Jan 22 13:47:29 compute-2 sudo[148863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:29 compute-2 podman[148788]: 2026-01-22 13:47:29.094549898 +0000 UTC m=+0.143710754 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 13:47:29 compute-2 python3.9[148865]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:47:29 compute-2 systemd[1]: Reloading.
Jan 22 13:47:29 compute-2 systemd-sysv-generator[148898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:47:29 compute-2 systemd-rc-local-generator[148893]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:47:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:29.618+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:29 compute-2 sudo[148863]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:29.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:29 compute-2 ceph-mon[77081]: pgmap v572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 639 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:30 compute-2 sudo[149054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnypcsrosmxmvqgbmqjylaacsuzchexo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089650.1683185-692-91400775119473/AnsiballZ_command.py'
Jan 22 13:47:30 compute-2 sudo[149054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:30.623+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:30 compute-2 python3.9[149056]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:30 compute-2 sudo[149054]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:30 compute-2 ceph-mon[77081]: pgmap v573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:31.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:31 compute-2 sudo[149207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeztarpeujxnfkinwbgbgweqyixukias ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089650.9087355-692-2460578150922/AnsiballZ_command.py'
Jan 22 13:47:31 compute-2 sudo[149207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:31 compute-2 python3.9[149209]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:31 compute-2 sudo[149207]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:31 compute-2 sudo[149211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:31 compute-2 sudo[149211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:31 compute-2 sudo[149211]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:31 compute-2 sudo[149260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:31 compute-2 sudo[149260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:31 compute-2 sudo[149260]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:31.651+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:47:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:47:32 compute-2 sudo[149410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhxtbcpuitzbkegmewepagpnsmcwycgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089651.5760376-692-201681315286065/AnsiballZ_command.py'
Jan 22 13:47:32 compute-2 sudo[149410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:32.612+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:33.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:33.629+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:33 compute-2 python3.9[149412]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:33 compute-2 sudo[149410]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:34 compute-2 sudo[149564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pndmajntpjbjvewslfmvjmckfomsxezx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089654.019816-692-139534550544475/AnsiballZ_command.py'
Jan 22 13:47:34 compute-2 sudo[149564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:34 compute-2 python3.9[149567]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:34 compute-2 sudo[149564]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:34.627+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:35.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:35 compute-2 sudo[149718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfmpsteymnfhadgwhuzzdegjewptbbhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089654.685136-692-168405722287019/AnsiballZ_command.py'
Jan 22 13:47:35 compute-2 sudo[149718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-2 ceph-mon[77081]: pgmap v574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-2 ceph-mon[77081]: pgmap v575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:35 compute-2 python3.9[149720]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:35 compute-2 sudo[149718]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:35.621+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:35.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:35 compute-2 sudo[149871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytothvmuvzrbojhsnpcgccvtjbsgeqqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089655.650401-692-193093040055034/AnsiballZ_command.py'
Jan 22 13:47:35 compute-2 sudo[149871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:36 compute-2 python3.9[149873]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:36 compute-2 sudo[149871]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:36.668+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:36 compute-2 sudo[150025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaocsfhphrkyshusqcinwdrerzsjufrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089656.4009976-692-161781182010283/AnsiballZ_command.py'
Jan 22 13:47:36 compute-2 sudo[150025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:37 compute-2 python3.9[150027]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:47:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:37.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:37 compute-2 sudo[150025]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:37.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:37.709+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:37 compute-2 ceph-mon[77081]: pgmap v576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:37 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 644 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:38 compute-2 sudo[150178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfeuftczjxrwroiulnvazfttzdxghxzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089657.7382586-854-244285231885833/AnsiballZ_getent.py'
Jan 22 13:47:38 compute-2 sudo[150178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:38 compute-2 python3.9[150180]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 22 13:47:38 compute-2 sudo[150178]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:38.743+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:38 compute-2 ceph-mon[77081]: pgmap v577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:39.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:39 compute-2 sudo[150332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chrmyuaojmpgpildtgcmqtjzluyklruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089658.6732552-878-181348032619883/AnsiballZ_group.py'
Jan 22 13:47:39 compute-2 sudo[150332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:39 compute-2 python3.9[150334]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:47:39 compute-2 groupadd[150335]: group added to /etc/group: name=libvirt, GID=42473
Jan 22 13:47:39 compute-2 groupadd[150335]: group added to /etc/gshadow: name=libvirt
Jan 22 13:47:39 compute-2 groupadd[150335]: new group: name=libvirt, GID=42473
Jan 22 13:47:39 compute-2 sudo[150332]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:39.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:40 compute-2 sudo[150490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hafzfaxofnwaskpljpbaqufuoilurrup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089659.835867-901-269602089577695/AnsiballZ_user.py'
Jan 22 13:47:40 compute-2 sudo[150490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:40 compute-2 python3.9[150493]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:47:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:40.754+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:41.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:41 compute-2 useradd[150495]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 22 13:47:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:41.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:41.799+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:41 compute-2 sudo[150490]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:41 compute-2 ceph-mon[77081]: pgmap v578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:42 compute-2 sudo[150652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erbagbptpggdmssxnvewcdctkkmpxjam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089662.2897782-934-90896886303086/AnsiballZ_setup.py'
Jan 22 13:47:42 compute-2 sudo[150652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:42.824+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:42 compute-2 ceph-mon[77081]: pgmap v579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:42 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:42 compute-2 python3.9[150654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:47:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:43.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:43 compute-2 sudo[150652]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:43 compute-2 sudo[150736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldlqdalrknvgdiostilhjozlyvbihmrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089662.2897782-934-90896886303086/AnsiballZ_dnf.py'
Jan 22 13:47:43 compute-2 sudo[150736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:47:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:43.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:43 compute-2 python3.9[150738]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:47:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:43.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:44.815+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:45.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:45 compute-2 ceph-mon[77081]: pgmap v580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:45.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:45.780+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:46 compute-2 podman[150747]: 2026-01-22 13:47:46.026183489 +0000 UTC m=+0.077113675 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 22 13:47:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:46.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:47.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:47:47.152 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:47:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:47:47.153 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:47:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:47:47.153 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:47:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:47 compute-2 ceph-mon[77081]: pgmap v581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:47.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:47.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:48.750+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:49.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:49 compute-2 ceph-mon[77081]: pgmap v582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:49 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:49.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:50.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:51 compute-2 sudo[150773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:51 compute-2 sudo[150773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:51 compute-2 sudo[150773]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:51 compute-2 sudo[150798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:47:51 compute-2 sudo[150798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:47:51 compute-2 sudo[150798]: pam_unix(sudo:session): session closed for user root
Jan 22 13:47:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:51.731+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:51 compute-2 ceph-mon[77081]: pgmap v583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:52.778+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:53.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:53 compute-2 ceph-mon[77081]: pgmap v584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:53.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:53.800+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:54.805+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:47:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:55.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:47:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:55.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:55.823+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:56 compute-2 ceph-mon[77081]: pgmap v585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:56.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:47:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:57.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:47:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:57.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:57 compute-2 ceph-mon[77081]: pgmap v586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:57.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:58.832+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:58 compute-2 ceph-mon[77081]: pgmap v587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:47:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:47:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:47:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:47:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:59.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:47:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:47:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:47:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:59.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:47:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:59.789+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:47:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:47:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:00 compute-2 podman[150999]: 2026-01-22 13:48:00.059455635 +0000 UTC m=+0.124432543 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 13:48:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:00.754+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:00 compute-2 ceph-mon[77081]: pgmap v588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:01.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:01 compute-2 sudo[151027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:01 compute-2 sudo[151027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-2 sudo[151027]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:01 compute-2 sudo[151052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:48:01 compute-2 sudo[151052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-2 sudo[151052]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:01 compute-2 sudo[151077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:01 compute-2 sudo[151077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-2 sudo[151077]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:01.711+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:01 compute-2 sudo[151102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:48:01 compute-2 sudo[151102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:48:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:01.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:48:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:02 compute-2 sudo[151102]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:02.732+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:03.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:03 compute-2 ceph-mon[77081]: pgmap v589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:48:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:48:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:48:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:48:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:48:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:48:03 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:03.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:04.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:05.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:05 compute-2 ceph-mon[77081]: pgmap v590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:05.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:05.793+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:06.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:07 compute-2 ceph-mon[77081]: pgmap v591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:07.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:07.789+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:08.759+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:09.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:09 compute-2 ceph-mon[77081]: pgmap v592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:09 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:09.765+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:10.729+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:11.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:11.715+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:12.674+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 13:48:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:13.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 13:48:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:13.707+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:13.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:14.724+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:15.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:15 compute-2 ceph-mds[81154]: mds.beacon.cephfs.compute-2.zycvef missed beacon ack from the monitors
Jan 22 13:48:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:15.675+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:15.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:15 compute-2 ceph-mon[77081]: pgmap v593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:16 compute-2 sudo[151176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:16 compute-2 sudo[151176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:16 compute-2 sudo[151176]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:16 compute-2 sudo[151207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:16 compute-2 podman[151200]: 2026-01-22 13:48:16.201347653 +0000 UTC m=+0.051722933 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 13:48:16 compute-2 sudo[151207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:16 compute-2 sudo[151207]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:16 compute-2 kernel: SELinux:  Converting 2777 SID table entries...
Jan 22 13:48:16 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:48:16 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:48:16 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:48:16 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:48:16 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:48:16 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:48:16 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:48:16 compute-2 sudo[151247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:16 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 22 13:48:16 compute-2 sudo[151247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:16 compute-2 sudo[151247]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:16 compute-2 sudo[151272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:48:16 compute-2 sudo[151272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:16 compute-2 sudo[151272]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:16.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 13:48:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:17.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: pgmap v594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: pgmap v595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 ceph-mon[77081]: pgmap v596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:48:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:17.716+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:17.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:18 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:18.708+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:19.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:19.661+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:19 compute-2 ceph-mon[77081]: pgmap v597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:19.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:20.637+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:21 compute-2 ceph-mon[77081]: pgmap v598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:21.635+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:21.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:22 compute-2 sshd-session[151300]: Connection closed by authenticating user ftp 69.12.83.184 port 38744 [preauth]
Jan 22 13:48:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:22.637+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:23 compute-2 ceph-mon[77081]: pgmap v599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:23.597+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:24.559+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 13:48:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:25.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 13:48:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:25 compute-2 ceph-mon[77081]: pgmap v600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:25.538+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:25 compute-2 kernel: SELinux:  Converting 2777 SID table entries...
Jan 22 13:48:25 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:48:25 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:48:25 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:48:25 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:48:25 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:48:25 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:48:25 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:48:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:25.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:26.546+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:27.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:27 compute-2 ceph-mon[77081]: pgmap v601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:27.511+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:27.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:28.560+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:29.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:29.550+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:29 compute-2 ceph-mon[77081]: pgmap v602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:29.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:30.510+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:30 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 22 13:48:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:31 compute-2 podman[151317]: 2026-01-22 13:48:31.071463082 +0000 UTC m=+0.110263509 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:48:31 compute-2 sshd-session[151315]: Invalid user vpn from 69.12.83.184 port 38828
Jan 22 13:48:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:31.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:31 compute-2 sshd-session[151315]: Connection closed by invalid user vpn 69.12.83.184 port 38828 [preauth]
Jan 22 13:48:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:31.507+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:31.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:31 compute-2 ceph-mon[77081]: pgmap v603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:32.549+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:32 compute-2 ceph-mon[77081]: pgmap v604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:33.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:33.579+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:34.585+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 13:48:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:35.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 13:48:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:35 compute-2 ceph-mon[77081]: pgmap v605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:35.578+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:48:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:35.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:48:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:36 compute-2 sudo[151345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:36 compute-2 sudo[151345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:36 compute-2 sudo[151345]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:36 compute-2 sudo[151371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:36 compute-2 sudo[151371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:36 compute-2 sudo[151371]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:36.588+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:37.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:37.562+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:37 compute-2 ceph-mon[77081]: pgmap v606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:38.565+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:38 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:39.571+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:39 compute-2 ceph-mon[77081]: pgmap v607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:39.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:40.580+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:40 compute-2 ceph-mon[77081]: pgmap v608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:48:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:41.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:48:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:41.571+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:41.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:42.548+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.876545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722876609, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 3098, "num_deletes": 507, "total_data_size": 5956441, "memory_usage": 6060048, "flush_reason": "Manual Compaction"}
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722913991, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3880836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12661, "largest_seqno": 15754, "table_properties": {"data_size": 3869661, "index_size": 6325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 30436, "raw_average_key_size": 20, "raw_value_size": 3843024, "raw_average_value_size": 2563, "num_data_blocks": 276, "num_entries": 1499, "num_filter_entries": 1499, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089508, "oldest_key_time": 1769089508, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 37584 microseconds, and 9893 cpu microseconds.
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.914140) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3880836 bytes OK
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.914185) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.916323) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.916342) EVENT_LOG_v1 {"time_micros": 1769089722916337, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.916362) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5941628, prev total WAL file size 5941628, number of live WAL files 2.
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.918010) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3789KB)], [24(8116KB)]
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722918071, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12192422, "oldest_snapshot_seqno": -1}
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 5025 keys, 10032301 bytes, temperature: kUnknown
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722994495, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 10032301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9996709, "index_size": 21914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125757, "raw_average_key_size": 25, "raw_value_size": 9903583, "raw_average_value_size": 1970, "num_data_blocks": 912, "num_entries": 5025, "num_filter_entries": 5025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.994856) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10032301 bytes
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.996563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.3 rd, 131.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6056, records dropped: 1031 output_compression: NoCompression
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.996595) EVENT_LOG_v1 {"time_micros": 1769089722996580, "job": 12, "event": "compaction_finished", "compaction_time_micros": 76527, "compaction_time_cpu_micros": 21766, "output_level": 6, "num_output_files": 1, "total_output_size": 10032301, "num_input_records": 6056, "num_output_records": 5025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:48:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722998259, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089723001212, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.917913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:48:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:48:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:43.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:48:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:43 compute-2 ceph-mon[77081]: pgmap v609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:43 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:43.545+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:43.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:44.519+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:45.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:45 compute-2 ceph-mon[77081]: pgmap v610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:45.529+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:46.500+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:47 compute-2 podman[157261]: 2026-01-22 13:48:47.003243318 +0000 UTC m=+0.059121756 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:48:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:48:47.153 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:48:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:48:47.154 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:48:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:48:47.154 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:48:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:47.530+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:48 compute-2 ceph-mon[77081]: pgmap v611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:48 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:48.528+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:48:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:49.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:48:49 compute-2 ceph-mon[77081]: pgmap v612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:49 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:49.549+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:49.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:50.523+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 22 13:48:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:51.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 22 13:48:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:51.517+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:51.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:51 compute-2 ceph-mon[77081]: pgmap v613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:52.519+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:48:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:53.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:48:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:53 compute-2 ceph-mon[77081]: pgmap v614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:53.543+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:54.516+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:55.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:55 compute-2 ceph-mon[77081]: pgmap v615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:55.508+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:55.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:48:56 compute-2 sudo[163182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:56.625+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:56 compute-2 sudo[163182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:56 compute-2 sudo[163182]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:56 compute-2 sudo[163248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:48:56 compute-2 sudo[163248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:48:56 compute-2 sudo[163248]: pam_unix(sudo:session): session closed for user root
Jan 22 13:48:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 13:48:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:57.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 13:48:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:57.659+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:57.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:57 compute-2 ceph-mon[77081]: pgmap v616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:58.681+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:58 compute-2 ceph-mon[77081]: pgmap v617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:48:58 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:48:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:48:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:59.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:48:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:59.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:48:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:48:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:48:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:48:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:48:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:00.696+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:01 compute-2 ceph-mon[77081]: pgmap v618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:01.648+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:01 compute-2 anacron[8202]: Job `cron.monthly' started
Jan 22 13:49:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:01 compute-2 anacron[8202]: Job `cron.monthly' terminated
Jan 22 13:49:01 compute-2 anacron[8202]: Normal exit (3 jobs run)
Jan 22 13:49:02 compute-2 podman[167001]: 2026-01-22 13:49:02.07928568 +0000 UTC m=+0.120599294 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 13:49:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:02.603+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:03.618+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:03 compute-2 ceph-mon[77081]: pgmap v619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:03 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:03.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:04.608+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:05.578+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:05 compute-2 ceph-mon[77081]: pgmap v620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:06.622+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:07 compute-2 ceph-mon[77081]: pgmap v621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:07.594+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:08.599+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:08 compute-2 sshd-session[168377]: Invalid user sol from 92.118.39.95 port 59746
Jan 22 13:49:09 compute-2 sshd-session[168377]: Connection closed by invalid user sol 92.118.39.95 port 59746 [preauth]
Jan 22 13:49:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:09.616+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:09 compute-2 ceph-mon[77081]: pgmap v622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:09 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:10.644+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:11.659+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:11.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:12.638+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:12 compute-2 ceph-mon[77081]: pgmap v623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:13.673+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:13.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:14 compute-2 ceph-mon[77081]: pgmap v624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:14.686+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:15 compute-2 irqbalance[785]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 22 13:49:15 compute-2 irqbalance[785]: IRQ 26 affinity is now unmanaged
Jan 22 13:49:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:15 compute-2 ceph-mon[77081]: pgmap v625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:15.717+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:15.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:16 compute-2 sudo[168388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:16 compute-2 sudo[168388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-2 sudo[168388]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-2 sudo[168413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:49:16 compute-2 sudo[168413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-2 sudo[168413]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:16.710+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:16 compute-2 sudo[168438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:16 compute-2 sudo[168438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-2 sudo[168438]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-2 sudo[168441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:16 compute-2 sudo[168441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-2 sudo[168441]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-2 sudo[168488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:16 compute-2 sudo[168488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:16 compute-2 sudo[168488]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:16 compute-2 sudo[168491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:49:16 compute-2 sudo[168491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:17 compute-2 sudo[168491]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:17.718+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:17.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:17 compute-2 podman[168572]: 2026-01-22 13:49:17.997366316 +0000 UTC m=+0.053529071 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 13:49:18 compute-2 kernel: SELinux:  Converting 2778 SID table entries...
Jan 22 13:49:18 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 13:49:18 compute-2 kernel: SELinux:  policy capability open_perms=1
Jan 22 13:49:18 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 13:49:18 compute-2 kernel: SELinux:  policy capability always_check_network=0
Jan 22 13:49:18 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 13:49:18 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 13:49:18 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 13:49:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:18.741+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-2 ceph-mon[77081]: pgmap v626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 13:49:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:49:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:49:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:49:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:49:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:49:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:49:19 compute-2 ceph-mon[77081]: pgmap v627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:19 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:19.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:19 compute-2 groupadd[168601]: group added to /etc/group: name=dnsmasq, GID=993
Jan 22 13:49:19 compute-2 groupadd[168601]: group added to /etc/gshadow: name=dnsmasq
Jan 22 13:49:19 compute-2 groupadd[168601]: new group: name=dnsmasq, GID=993
Jan 22 13:49:19 compute-2 useradd[168608]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 22 13:49:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:19.725+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:19 compute-2 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 13:49:19 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 22 13:49:19 compute-2 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 13:49:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:19.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:20 compute-2 groupadd[168622]: group added to /etc/group: name=clevis, GID=992
Jan 22 13:49:20 compute-2 groupadd[168622]: group added to /etc/gshadow: name=clevis
Jan 22 13:49:20 compute-2 groupadd[168622]: new group: name=clevis, GID=992
Jan 22 13:49:20 compute-2 useradd[168629]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 22 13:49:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:20.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:20 compute-2 usermod[168639]: add 'clevis' to group 'tss'
Jan 22 13:49:20 compute-2 usermod[168639]: add 'clevis' to shadow group 'tss'
Jan 22 13:49:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:21.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:21.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:22.814+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:23.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:23.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000037s ======
Jan 22 13:49:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 22 13:49:24 compute-2 polkitd[43481]: Reloading rules
Jan 22 13:49:24 compute-2 polkitd[43481]: Collecting garbage unconditionally...
Jan 22 13:49:24 compute-2 polkitd[43481]: Loading rules from directory /etc/polkit-1/rules.d
Jan 22 13:49:24 compute-2 polkitd[43481]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 22 13:49:24 compute-2 polkitd[43481]: Finished loading, compiling and executing 3 rules
Jan 22 13:49:24 compute-2 polkitd[43481]: Reloading rules
Jan 22 13:49:24 compute-2 polkitd[43481]: Collecting garbage unconditionally...
Jan 22 13:49:24 compute-2 polkitd[43481]: Loading rules from directory /etc/polkit-1/rules.d
Jan 22 13:49:24 compute-2 polkitd[43481]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 22 13:49:24 compute-2 polkitd[43481]: Finished loading, compiling and executing 3 rules
Jan 22 13:49:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-2 ceph-mon[77081]: pgmap v628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-2 ceph-mon[77081]: pgmap v629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:24 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:24.819+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:25.812+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:26.789+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:27 compute-2 groupadd[168832]: group added to /etc/group: name=ceph, GID=167
Jan 22 13:49:27 compute-2 groupadd[168832]: group added to /etc/gshadow: name=ceph
Jan 22 13:49:27 compute-2 groupadd[168832]: new group: name=ceph, GID=167
Jan 22 13:49:27 compute-2 useradd[168838]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 22 13:49:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:27.766+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:27.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:28 compute-2 ceph-mon[77081]: pgmap v630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:28 compute-2 sshd-session[168845]: Invalid user sol from 45.148.10.240 port 59946
Jan 22 13:49:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:28.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:28 compute-2 sshd-session[168845]: Connection closed by invalid user sol 45.148.10.240 port 59946 [preauth]
Jan 22 13:49:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:29.735+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:29.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-2 ceph-mon[77081]: pgmap v631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:29 compute-2 ceph-mon[77081]: pgmap v632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:30.759+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:31 compute-2 sshd[1003]: Received signal 15; terminating.
Jan 22 13:49:31 compute-2 systemd[1]: Stopping OpenSSH server daemon...
Jan 22 13:49:31 compute-2 systemd[1]: sshd.service: Deactivated successfully.
Jan 22 13:49:31 compute-2 systemd[1]: Stopped OpenSSH server daemon.
Jan 22 13:49:31 compute-2 systemd[1]: sshd.service: Consumed 3.916s CPU time, read 32.0K from disk, written 132.0K to disk.
Jan 22 13:49:31 compute-2 systemd[1]: Stopped target sshd-keygen.target.
Jan 22 13:49:31 compute-2 systemd[1]: Stopping sshd-keygen.target...
Jan 22 13:49:31 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 13:49:31 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 13:49:31 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 13:49:31 compute-2 systemd[1]: Reached target sshd-keygen.target.
Jan 22 13:49:31 compute-2 systemd[1]: Starting OpenSSH server daemon...
Jan 22 13:49:31 compute-2 sshd[169467]: Server listening on 0.0.0.0 port 22.
Jan 22 13:49:31 compute-2 sshd[169467]: Server listening on :: port 22.
Jan 22 13:49:31 compute-2 systemd[1]: Started OpenSSH server daemon.
Jan 22 13:49:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:31 compute-2 ceph-mon[77081]: pgmap v633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:31.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:31.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:32 compute-2 podman[169583]: 2026-01-22 13:49:32.196003457 +0000 UTC m=+0.071787665 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 22 13:49:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:32.782+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:33 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:49:33 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:49:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:33 compute-2 ceph-mon[77081]: pgmap v634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:33.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:33 compute-2 systemd[1]: Reloading.
Jan 22 13:49:33 compute-2 systemd-rc-local-generator[169752]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:33 compute-2 systemd-sysv-generator[169757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:33 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:49:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:33.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:33.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:34.809+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:35.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:35.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:35.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:36 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:36.813+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:36 compute-2 sudo[173799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:36 compute-2 sudo[173799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:36 compute-2 sudo[173799]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:37 compute-2 sudo[173968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:37 compute-2 sudo[173968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:37 compute-2 sudo[173968]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:37.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:37 compute-2 sudo[174344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:37 compute-2 sudo[174344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:37 compute-2 sudo[174344]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:37 compute-2 sudo[174436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:49:37 compute-2 sudo[174436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:37 compute-2 sudo[174436]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:37 compute-2 sudo[150736]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-2 ceph-mon[77081]: pgmap v635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-2 ceph-mon[77081]: pgmap v636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:49:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:49:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:37.787+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:37.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:38.750+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:39.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:39 compute-2 ceph-mon[77081]: pgmap v637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:39 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:39.702+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:40.676+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:41.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:41 compute-2 ceph-mon[77081]: pgmap v638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:41.706+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:41 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:49:41 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:49:41 compute-2 systemd[1]: man-db-cache-update.service: Consumed 11.231s CPU time.
Jan 22 13:49:41 compute-2 systemd[1]: run-r34f66493f05c4848ac19fbbbaa195fd1.service: Deactivated successfully.
Jan 22 13:49:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:41.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:42.724+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:42 compute-2 ceph-mon[77081]: pgmap v639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 13:49:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:43.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 13:49:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:43.694+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:44.733+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:44 compute-2 ceph-mon[77081]: pgmap v640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:45.702+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:45.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:46.735+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:47.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:49:47.155 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:49:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:49:47.155 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:49:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:49:47.155 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:49:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:47.740+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:47.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:48.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:48 compute-2 podman[178264]: 2026-01-22 13:49:48.989634285 +0000 UTC m=+0.051805048 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 13:49:49 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:49 compute-2 ceph-mon[77081]: pgmap v641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:49.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:49.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:50 compute-2 ceph-mon[77081]: pgmap v642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:50 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:50.788+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:51.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:51 compute-2 sudo[178410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omosqswxsiyysmlfvjrrqrrattlfeqzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089790.9391267-970-12321436237807/AnsiballZ_systemd.py'
Jan 22 13:49:51 compute-2 sudo[178410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:51 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:51 compute-2 ceph-mon[77081]: pgmap v643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:51.819+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:51 compute-2 python3.9[178412]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:51 compute-2 systemd[1]: Reloading.
Jan 22 13:49:51 compute-2 systemd-rc-local-generator[178441]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:51 compute-2 systemd-sysv-generator[178444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:52 compute-2 sudo[178410]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:52 compute-2 sudo[178602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kltvkiafoffztnpxveypajruxutkiamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089792.4033923-970-252596678724840/AnsiballZ_systemd.py'
Jan 22 13:49:52 compute-2 sudo[178602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:52.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:53 compute-2 python3.9[178604]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:53 compute-2 systemd[1]: Reloading.
Jan 22 13:49:53 compute-2 systemd-rc-local-generator[178635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:53 compute-2 systemd-sysv-generator[178638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:53 compute-2 sudo[178602]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:53 compute-2 sudo[178793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfrgxtzrbwnhkwvlcbvrjinagnpvjmsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089793.4734023-970-228773988474028/AnsiballZ_systemd.py'
Jan 22 13:49:53 compute-2 sudo[178793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:53.780+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:54 compute-2 python3.9[178795]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:54 compute-2 systemd[1]: Reloading.
Jan 22 13:49:54 compute-2 systemd-rc-local-generator[178825]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:54 compute-2 systemd-sysv-generator[178828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:54 compute-2 ceph-mon[77081]: pgmap v644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:54 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:54 compute-2 sudo[178793]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:54.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:54 compute-2 sudo[178984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkmjjtykbxsprphadlrhzzkesyikjwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089794.551165-970-58033927964299/AnsiballZ_systemd.py'
Jan 22 13:49:54 compute-2 sudo[178984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:55 compute-2 python3.9[178986]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:49:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:55.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:55 compute-2 systemd[1]: Reloading.
Jan 22 13:49:55 compute-2 systemd-rc-local-generator[179015]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:55 compute-2 systemd-sysv-generator[179020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:55 compute-2 ceph-mon[77081]: pgmap v645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:55 compute-2 sudo[178984]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:55.816+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:49:56 compute-2 sudo[179175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccumzsgiwvhuozkggyaxaftqmtlbelgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089796.5440116-1058-62074378989735/AnsiballZ_systemd.py'
Jan 22 13:49:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:56.848+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:56 compute-2 sudo[179175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:57 compute-2 sudo[179178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:57 compute-2 sudo[179178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:57 compute-2 sudo[179178]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:57.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:57 compute-2 python3.9[179177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:49:57 compute-2 sudo[179203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:49:57 compute-2 sudo[179203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:49:57 compute-2 sudo[179203]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:57 compute-2 systemd[1]: Reloading.
Jan 22 13:49:57 compute-2 systemd-rc-local-generator[179254]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:57 compute-2 systemd-sysv-generator[179259]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:57 compute-2 ceph-mon[77081]: pgmap v646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:57 compute-2 sudo[179175]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:57.825+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:49:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:57.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:49:58 compute-2 sudo[179414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kivhjxrjbabhsvdbmlnebizhrqqcltxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089797.7444088-1058-214818984012288/AnsiballZ_systemd.py'
Jan 22 13:49:58 compute-2 sudo[179414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:58 compute-2 python3.9[179416]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:49:58 compute-2 systemd[1]: Reloading.
Jan 22 13:49:58 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:58 compute-2 systemd-rc-local-generator[179448]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:58 compute-2 systemd-sysv-generator[179451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:58 compute-2 sudo[179414]: pam_unix(sudo:session): session closed for user root
Jan 22 13:49:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:58.826+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:49:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:49:59 compute-2 sudo[179605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqucifkzhdwpcqbrqntrfytdpjhddcsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089798.9939506-1058-91331308037290/AnsiballZ_systemd.py'
Jan 22 13:49:59 compute-2 sudo[179605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:49:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:59 compute-2 ceph-mon[77081]: pgmap v647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:49:59 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:49:59 compute-2 python3.9[179607]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:49:59 compute-2 systemd[1]: Reloading.
Jan 22 13:49:59 compute-2 systemd-rc-local-generator[179634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:49:59 compute-2 systemd-sysv-generator[179638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:49:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:59.868+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:49:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:49:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:49:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:49:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:49:59 compute-2 sudo[179605]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:00 compute-2 sudo[179795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-novdccbcebimltvsfbxfgcgtxyappcso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089800.0907142-1058-216671763595867/AnsiballZ_systemd.py'
Jan 22 13:50:00 compute-2 sudo[179795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:00 compute-2 python3.9[179797]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 13:50:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 13:50:00 compute-2 sudo[179795]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:00.869+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:01 compute-2 sudo[179950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysifcdxvvchrlkboufpbbozpymwyyohi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089800.8680186-1058-220107660730543/AnsiballZ_systemd.py'
Jan 22 13:50:01 compute-2 sudo[179950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:01 compute-2 python3.9[179952]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:01 compute-2 systemd[1]: Reloading.
Jan 22 13:50:01 compute-2 systemd-rc-local-generator[179981]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:50:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:01 compute-2 systemd-sysv-generator[179986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:50:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:01 compute-2 ceph-mon[77081]: pgmap v648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:01 compute-2 sudo[179950]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:01.868+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:01.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:02.917+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:02 compute-2 ceph-mon[77081]: pgmap v649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:03 compute-2 podman[180016]: 2026-01-22 13:50:03.040239119 +0000 UTC m=+0.098199829 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 13:50:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:03.880+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:03.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:04.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:04 compute-2 sudo[180168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weldswmjctxwzhmtjtocjkupqchztsuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089804.5865657-1166-19512270209938/AnsiballZ_systemd.py'
Jan 22 13:50:04 compute-2 sudo[180168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:05 compute-2 python3.9[180170]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 13:50:05 compute-2 systemd[1]: Reloading.
Jan 22 13:50:05 compute-2 systemd-rc-local-generator[180199]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:50:05 compute-2 systemd-sysv-generator[180205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:50:05 compute-2 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 22 13:50:05 compute-2 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 22 13:50:05 compute-2 sudo[180168]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:05 compute-2 ceph-mon[77081]: pgmap v650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:05.882+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:05.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:06 compute-2 sudo[180362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpfitpnddflvdxsuvpgxpmzyxrizfecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089805.9382656-1190-245763860991137/AnsiballZ_systemd.py'
Jan 22 13:50:06 compute-2 sudo[180362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:06 compute-2 python3.9[180364]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:06 compute-2 sudo[180362]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:06.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:07 compute-2 sudo[180518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbdgzdluwemjlavxztkwatpeiaqzvyde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089806.784324-1190-105344932705283/AnsiballZ_systemd.py'
Jan 22 13:50:07 compute-2 sudo[180518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:07 compute-2 python3.9[180520]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:07 compute-2 sudo[180518]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:07 compute-2 ceph-mon[77081]: pgmap v651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:07.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:07 compute-2 sudo[180673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsnmhoaoffkieiqogxglaoxqchlgnyxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089807.5802848-1190-200242524643504/AnsiballZ_systemd.py'
Jan 22 13:50:07 compute-2 sudo[180673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:07.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:08 compute-2 python3.9[180675]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:08 compute-2 sudo[180673]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:08 compute-2 ceph-mon[77081]: pgmap v652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:08 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:08 compute-2 sudo[180829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzeopmmzjdzwhtzzeucsqknokrakbdvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089808.462836-1190-69966033916753/AnsiballZ_systemd.py'
Jan 22 13:50:08 compute-2 sudo[180829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:08.802+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:09 compute-2 python3.9[180831]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:09 compute-2 sudo[180829]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:09 compute-2 sudo[180984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtqveffhmyqgybnohglzqcfpjoigrmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089809.32334-1190-25873365205458/AnsiballZ_systemd.py'
Jan 22 13:50:09 compute-2 sudo[180984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:09.804+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:09 compute-2 python3.9[180986]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:10 compute-2 sudo[180984]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:10 compute-2 sudo[181140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cztlosjxapibqhiudmsjrklfxlvhkeov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089810.1585152-1190-43878795220441/AnsiballZ_systemd.py'
Jan 22 13:50:10 compute-2 sudo[181140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:10 compute-2 python3.9[181142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:10 compute-2 ceph-mon[77081]: pgmap v653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:10.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:10 compute-2 sudo[181140]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:11 compute-2 sudo[181295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itmdbnhyqzuojfvkzmvqsgiwywucjscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089810.9792633-1190-1359779830125/AnsiballZ_systemd.py'
Jan 22 13:50:11 compute-2 sudo[181295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:11 compute-2 python3.9[181297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:11 compute-2 sudo[181295]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:11.792+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:12 compute-2 sudo[181450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qopacrfefitlhcdwobxugqrxqhcvgloj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089811.7333674-1190-210435345738803/AnsiballZ_systemd.py'
Jan 22 13:50:12 compute-2 sudo[181450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:12 compute-2 python3.9[181452]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:12 compute-2 sudo[181450]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:12.834+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:12 compute-2 sudo[181606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-napdsegrwehumfyxfkhdczdcmoucaywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089812.533455-1190-17033370636594/AnsiballZ_systemd.py'
Jan 22 13:50:12 compute-2 sudo[181606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:12 compute-2 ceph-mon[77081]: pgmap v654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:13 compute-2 python3.9[181608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:13.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:13 compute-2 sudo[181606]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:13 compute-2 sudo[181761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oliovyujowlicrbvgtbhvsvvergazemj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089813.380452-1190-110009155373469/AnsiballZ_systemd.py'
Jan 22 13:50:13 compute-2 sudo[181761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:13.793+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:13 compute-2 python3.9[181763]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:14 compute-2 sudo[181761]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:14 compute-2 sudo[181917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwwpxcdjepjrjvbjhxdnowyntrzgrkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089814.2059271-1190-12384247036411/AnsiballZ_systemd.py'
Jan 22 13:50:14 compute-2 sudo[181917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:14 compute-2 python3.9[181919]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:14.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:14 compute-2 sudo[181917]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:15.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:15 compute-2 sudo[182072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pshjoaekcadbcdaqwbxvnrnlivewapdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089814.9645717-1190-262744984794988/AnsiballZ_systemd.py'
Jan 22 13:50:15 compute-2 sudo[182072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:15 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 804 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:15.723+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:15 compute-2 python3.9[182074]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:15 compute-2 sudo[182072]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:16 compute-2 sudo[182227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzazuwjhjdqqsuokixvgahxnvelohyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089815.9233718-1190-225623760786257/AnsiballZ_systemd.py'
Jan 22 13:50:16 compute-2 sudo[182227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:16 compute-2 python3.9[182229]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:16 compute-2 ceph-mon[77081]: pgmap v655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:16 compute-2 sudo[182227]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:16.705+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:17 compute-2 sudo[182383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsiacyymqfjsttgtfnyccpzjprgwilju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089816.7284489-1190-2514683081/AnsiballZ_systemd.py'
Jan 22 13:50:17 compute-2 sudo[182383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:17.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:17 compute-2 sudo[182386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:17 compute-2 sudo[182386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:17 compute-2 sudo[182386]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:17 compute-2 python3.9[182385]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 13:50:17 compute-2 sudo[182411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:17 compute-2 sudo[182411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:17 compute-2 sudo[182411]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:17 compute-2 sudo[182383]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:17 compute-2 ceph-mon[77081]: pgmap v656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:17.748+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:18.709+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:19 compute-2 sudo[182602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkykojudvfvoiocisjrkonhobvvcpvyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089818.8170385-1495-138829186300630/AnsiballZ_file.py'
Jan 22 13:50:19 compute-2 sudo[182602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:19 compute-2 podman[182563]: 2026-01-22 13:50:19.090546302 +0000 UTC m=+0.044708126 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 13:50:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:19.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:19 compute-2 python3.9[182610]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:19 compute-2 sudo[182602]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:19.729+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:19 compute-2 sudo[182760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvzrmnyhmnzqglinrddidvujpbalbeep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089819.4471927-1495-146548607772767/AnsiballZ_file.py'
Jan 22 13:50:19 compute-2 sudo[182760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:19 compute-2 python3.9[182762]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:19 compute-2 sudo[182760]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:20 compute-2 ceph-mon[77081]: pgmap v657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:20 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 809 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:20 compute-2 sudo[182913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzbpvmtklkrrsgfsadmapfikkkxfriu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089820.1129556-1495-251190852300235/AnsiballZ_file.py'
Jan 22 13:50:20 compute-2 sudo[182913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:20 compute-2 python3.9[182915]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:20 compute-2 sudo[182913]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:20.705+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:20 compute-2 sudo[183065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaietlyvbcooimbesiorqfcomdmidsia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089820.7125168-1495-123420346377507/AnsiballZ_file.py'
Jan 22 13:50:20 compute-2 sudo[183065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:21.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:21 compute-2 ceph-mon[77081]: pgmap v658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:21 compute-2 python3.9[183067]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:21 compute-2 sudo[183065]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:21.744+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:21 compute-2 sudo[183217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzydumpzcryeiiuhbvlucicdtvxuzbkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089821.512343-1495-125849502153447/AnsiballZ_file.py'
Jan 22 13:50:21 compute-2 sudo[183217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:22 compute-2 python3.9[183219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:22 compute-2 sudo[183217]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:22 compute-2 sudo[183370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fulphchyrogmwasunustsiapijmsiqpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089822.2285955-1495-150355607869913/AnsiballZ_file.py'
Jan 22 13:50:22 compute-2 sudo[183370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:22 compute-2 python3.9[183372]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:50:22 compute-2 sudo[183370]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:22.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:23.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:23 compute-2 ceph-mon[77081]: pgmap v659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:23.808+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:23 compute-2 python3.9[183522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:50:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:24 compute-2 sudo[183673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrrqxmpnehpgiaqsldlmoulpyrvximgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089824.161673-1649-137362732130020/AnsiballZ_stat.py'
Jan 22 13:50:24 compute-2 sudo[183673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:24 compute-2 python3.9[183675]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:24 compute-2 sudo[183673]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:24.842+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:25.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:25 compute-2 sudo[183798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylhxnfjhegexyngrzsoagolsssbiccqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089824.161673-1649-137362732130020/AnsiballZ_copy.py'
Jan 22 13:50:25 compute-2 sudo[183798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:25 compute-2 python3.9[183800]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089824.161673-1649-137362732130020/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:25 compute-2 sudo[183798]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:25.823+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:26 compute-2 sudo[183950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpsxislfcqpjkskjhsopwwbjstqjcfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089825.7541497-1649-199381904115958/AnsiballZ_stat.py'
Jan 22 13:50:26 compute-2 sudo[183950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:26 compute-2 ceph-mon[77081]: pgmap v660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:26 compute-2 python3.9[183952]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:26 compute-2 sudo[183950]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:26 compute-2 sudo[184076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asglzuahyybtnxdsmcwykkjubslulyla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089825.7541497-1649-199381904115958/AnsiballZ_copy.py'
Jan 22 13:50:26 compute-2 sudo[184076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:26 compute-2 python3.9[184078]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089825.7541497-1649-199381904115958/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:26.854+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:26 compute-2 sudo[184076]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:27.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:27 compute-2 sudo[184228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fempfizqjgdgievbputaskcrpqsnmaiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089827.0180378-1649-194623592548719/AnsiballZ_stat.py'
Jan 22 13:50:27 compute-2 sudo[184228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:27 compute-2 ceph-mon[77081]: pgmap v661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:27 compute-2 python3.9[184230]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:27 compute-2 sudo[184228]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:27.859+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:27 compute-2 sudo[184353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbmwiadsqycmtixmunktxlkdibekwnby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089827.0180378-1649-194623592548719/AnsiballZ_copy.py'
Jan 22 13:50:27 compute-2 sudo[184353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:50:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:50:28 compute-2 python3.9[184355]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089827.0180378-1649-194623592548719/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:28 compute-2 sudo[184353]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:28 compute-2 sudo[184506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jogdakvodilssewtmyghtlcneursauim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089828.2613804-1649-54547709334028/AnsiballZ_stat.py'
Jan 22 13:50:28 compute-2 sudo[184506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:28 compute-2 python3.9[184508]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:28 compute-2 sudo[184506]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:28.867+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:29.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:29 compute-2 sudo[184631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edbngngbwslfuegvpqexyazqrcvnkysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089828.2613804-1649-54547709334028/AnsiballZ_copy.py'
Jan 22 13:50:29 compute-2 sudo[184631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:29 compute-2 python3.9[184633]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089828.2613804-1649-54547709334028/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:29 compute-2 sudo[184631]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:29.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:30 compute-2 sudo[184783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aallbnezremdarmmvaigbmoiysaoujfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089829.6403964-1649-17867872157733/AnsiballZ_stat.py'
Jan 22 13:50:30 compute-2 sudo[184783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:30 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 814 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:30 compute-2 python3.9[184785]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:30 compute-2 sudo[184783]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:30 compute-2 sudo[184909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmrlmtddmgjworbsifmolixsalqlwtdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089829.6403964-1649-17867872157733/AnsiballZ_copy.py'
Jan 22 13:50:30 compute-2 sudo[184909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:30.882+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:31.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:31.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:31 compute-2 ceph-mon[77081]: pgmap v662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:31 compute-2 ceph-mon[77081]: pgmap v663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:32 compute-2 python3.9[184911]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089829.6403964-1649-17867872157733/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:32 compute-2 sudo[184909]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:32 compute-2 sudo[185062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctkwaggziisvtvihgzcaybuqjymocsdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089832.199473-1649-215269016055670/AnsiballZ_stat.py'
Jan 22 13:50:32 compute-2 sudo[185062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:32 compute-2 python3.9[185064]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:32 compute-2 sudo[185062]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:32.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-2 sudo[185187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfqngdqtsynxpdjvawonmmzsbgopitrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089832.199473-1649-215269016055670/AnsiballZ_copy.py'
Jan 22 13:50:33 compute-2 sudo[185187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-2 ceph-mon[77081]: pgmap v664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:33.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:33 compute-2 python3.9[185189]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089832.199473-1649-215269016055670/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:33 compute-2 sudo[185187]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:33 compute-2 podman[185190]: 2026-01-22 13:50:33.333045307 +0000 UTC m=+0.075481532 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 13:50:33 compute-2 sudo[185363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdejylggrlezqhimhbylxzgaoncolop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089833.413527-1649-76366723872295/AnsiballZ_stat.py'
Jan 22 13:50:33 compute-2 sudo[185363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:33.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:33 compute-2 python3.9[185365]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:33 compute-2 sudo[185363]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:34 compute-2 sudo[185486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bisgmasyrnlpgzgyhcqwqnvzqrcnfzdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089833.413527-1649-76366723872295/AnsiballZ_copy.py'
Jan 22 13:50:34 compute-2 sudo[185486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:34 compute-2 python3.9[185488]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089833.413527-1649-76366723872295/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:34 compute-2 sudo[185486]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:34 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 819 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:34 compute-2 sudo[185639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llzpjbrfwxnidmkgsnapwsixowiwmzuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089834.5563552-1649-38729854888181/AnsiballZ_stat.py'
Jan 22 13:50:34 compute-2 sudo[185639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:34.877+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:35 compute-2 python3.9[185641]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:35 compute-2 sudo[185639]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:35.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:35 compute-2 sudo[185764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djuwngceevkybyvgwspzubgfgxvhrepf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089834.5563552-1649-38729854888181/AnsiballZ_copy.py'
Jan 22 13:50:35 compute-2 sudo[185764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:35 compute-2 python3.9[185766]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089834.5563552-1649-38729854888181/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:35 compute-2 sudo[185764]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:35 compute-2 ceph-mon[77081]: pgmap v665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:35.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:36.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:36 compute-2 ceph-mon[77081]: pgmap v666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:36.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:37 compute-2 sudo[185917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sarzowdpnfucdaiaeprdfllxvyutyfph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089837.0951393-1988-231211090628723/AnsiballZ_command.py'
Jan 22 13:50:37 compute-2 sudo[185917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:37 compute-2 sudo[185920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:37 compute-2 sudo[185920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-2 sudo[185920]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-2 sudo[185945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:37 compute-2 sudo[185945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-2 sudo[185945]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-2 python3.9[185919]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 22 13:50:37 compute-2 sudo[185970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:37 compute-2 sudo[185970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-2 sudo[185970]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-2 sudo[185917]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-2 sudo[185996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:50:37 compute-2 sudo[185996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-2 sudo[185996]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-2 sudo[186033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:37 compute-2 sudo[186033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-2 sudo[186033]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:37 compute-2 sudo[186070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:50:37 compute-2 sudo[186070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:37.794+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:38 compute-2 sudo[186070]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:38.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:38 compute-2 sudo[186252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjybcwbsztkwqxfyllalaqgncczkiauc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089838.4946685-2014-112640442078470/AnsiballZ_file.py'
Jan 22 13:50:38 compute-2 sudo[186252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:38.799+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:38 compute-2 python3.9[186254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:38 compute-2 sudo[186252]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:39.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:39 compute-2 sudo[186404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeyraehpkygwctlykvvquyylqkenaqij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089839.1503606-2014-75327160912957/AnsiballZ_file.py'
Jan 22 13:50:39 compute-2 sudo[186404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:39 compute-2 python3.9[186406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:39 compute-2 sudo[186404]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:39 compute-2 ceph-mon[77081]: pgmap v667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:39 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 824 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:50:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:50:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:39.821+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:40 compute-2 sudo[186556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcxxldjradigjaslcnrklytzdpjkgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089839.7569811-2014-255912313426423/AnsiballZ_file.py'
Jan 22 13:50:40 compute-2 sudo[186556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:40.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:40 compute-2 python3.9[186558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:40 compute-2 sudo[186556]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:50:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:50:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:50:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:50:40 compute-2 sudo[186709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yncninflevfzkosfrogzyibeizpflena ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089840.4726627-2014-54069774000057/AnsiballZ_file.py'
Jan 22 13:50:40 compute-2 sudo[186709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:40.852+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:40 compute-2 python3.9[186711]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:41 compute-2 sudo[186709]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:41.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:41 compute-2 sudo[186861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irplqhkjcdnvddxqtirgkqpmnjgxifiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089841.1376243-2014-274401264860411/AnsiballZ_file.py'
Jan 22 13:50:41 compute-2 sudo[186861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:41 compute-2 python3.9[186863]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:41 compute-2 sudo[186861]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:41.819+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:42 compute-2 ceph-mon[77081]: pgmap v668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:42 compute-2 sudo[187013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocaqmyetsuspxfqzlcadwjfnljxzklwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089841.888328-2014-223121537271558/AnsiballZ_file.py'
Jan 22 13:50:42 compute-2 sudo[187013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:42.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:42 compute-2 python3.9[187015]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:42 compute-2 sudo[187013]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:42.785+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:42 compute-2 sudo[187166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xucnpeanmmifjzrkbhhbrozqrspwtrjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089842.536836-2014-160024466950827/AnsiballZ_file.py'
Jan 22 13:50:42 compute-2 sudo[187166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:43 compute-2 python3.9[187168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:43 compute-2 sudo[187166]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:43 compute-2 ceph-mon[77081]: pgmap v669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:43 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 834 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:43 compute-2 sudo[187318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lksxhejkoutrycthuhqatmhukhzafpnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089843.219267-2014-226359602083713/AnsiballZ_file.py'
Jan 22 13:50:43 compute-2 sudo[187318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:43 compute-2 python3.9[187320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:43 compute-2 sudo[187318]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:43.829+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:44 compute-2 sudo[187470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bttcwxnheyypejvynjrccrpmldxzzdhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089843.870653-2014-45120213915116/AnsiballZ_file.py'
Jan 22 13:50:44 compute-2 sudo[187470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:44 compute-2 python3.9[187472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:44 compute-2 sudo[187470]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:44 compute-2 sudo[187623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cevasakqervawgkiubjtwykijsjugdky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089844.5398428-2014-194744227433548/AnsiballZ_file.py'
Jan 22 13:50:44 compute-2 sudo[187623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:44.827+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:44 compute-2 python3.9[187625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:44 compute-2 sudo[187623]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:45.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:45 compute-2 ceph-mon[77081]: pgmap v670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:45 compute-2 sudo[187775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgljqacejnaclrozfoiojclhusycgppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089845.1263564-2014-144900423430292/AnsiballZ_file.py'
Jan 22 13:50:45 compute-2 sudo[187775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:45 compute-2 python3.9[187777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:45 compute-2 sudo[187775]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:45.805+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:46 compute-2 sudo[187927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqeetxyqgieatdgdpcnbewldfbqjinaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089845.7429297-2014-242147802875459/AnsiballZ_file.py'
Jan 22 13:50:46 compute-2 sudo[187927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:46 compute-2 python3.9[187929]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:46 compute-2 sudo[187927]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:46 compute-2 sudo[188080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmxoqkvwwhruxmzcfmuxadjgettakxmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089846.356221-2014-93527718706658/AnsiballZ_file.py'
Jan 22 13:50:46 compute-2 sudo[188080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:46 compute-2 python3.9[188082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:46 compute-2 sudo[188080]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:46.815+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:50:47.156 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:50:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:50:47.156 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:50:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:50:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:50:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:47.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:47 compute-2 sudo[188232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksptsvjhxphwxmggpczvgxdubhdvmjal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089846.9484055-2014-145418076594172/AnsiballZ_file.py'
Jan 22 13:50:47 compute-2 sudo[188232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:47 compute-2 python3.9[188234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:47 compute-2 sudo[188232]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:47.784+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:48.780+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:49 compute-2 sudo[188385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmqayfppaiydcgpwrqqxghsxmututzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089848.6966958-2311-144238644784835/AnsiballZ_stat.py'
Jan 22 13:50:49 compute-2 sudo[188385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:49 compute-2 python3.9[188387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:49 compute-2 sudo[188385]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:49.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:49 compute-2 podman[188482]: 2026-01-22 13:50:49.59918458 +0000 UTC m=+0.050221199 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 13:50:49 compute-2 sudo[188523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdstqdevldoihhivvgjjwmbqoxzzcept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089848.6966958-2311-144238644784835/AnsiballZ_copy.py'
Jan 22 13:50:49 compute-2 sudo[188523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:49.741+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:49 compute-2 python3.9[188527]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089848.6966958-2311-144238644784835/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:49 compute-2 sudo[188523]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:50 compute-2 sudo[188677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mliwixpckcdtebupaousnhckqstzeumy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089849.9526317-2311-27249398415297/AnsiballZ_stat.py'
Jan 22 13:50:50 compute-2 sudo[188677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:50 compute-2 python3.9[188679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:50 compute-2 sudo[188677]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:50 compute-2 sudo[188801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrocghlfywfszucsgkuluolfpidykoqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089849.9526317-2311-27249398415297/AnsiballZ_copy.py'
Jan 22 13:50:50 compute-2 sudo[188801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:50.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:50 compute-2 python3.9[188803]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089849.9526317-2311-27249398415297/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:50 compute-2 sudo[188801]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:51 compute-2 sudo[188953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjvevrpbsgqfzhchkgapiomsaplimneo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089851.1138203-2311-254952131454887/AnsiballZ_stat.py'
Jan 22 13:50:51 compute-2 sudo[188953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:51 compute-2 python3.9[188955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:51 compute-2 sudo[188953]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:51.720+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:52 compute-2 sudo[189076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdsdifwkjpaztcsbnbtgsydjtlzkmnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089851.1138203-2311-254952131454887/AnsiballZ_copy.py'
Jan 22 13:50:52 compute-2 sudo[189076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:52 compute-2 python3.9[189078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089851.1138203-2311-254952131454887/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:52 compute-2 sudo[189076]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:52 compute-2 sudo[189229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmbaihxrjykgpcnmevxyrecxtnmxzagl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089852.37494-2311-96388371195678/AnsiballZ_stat.py'
Jan 22 13:50:52 compute-2 sudo[189229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:52.707+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:52 compute-2 python3.9[189231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:52 compute-2 sudo[189229]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:52 compute-2 sudo[189236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:52 compute-2 sudo[189236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:52 compute-2 sudo[189236]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:52 compute-2 ceph-mon[77081]: pgmap v671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:52 compute-2 ceph-mon[77081]: pgmap v672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:50:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:52 compute-2 ceph-mon[77081]: pgmap v673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:52 compute-2 sudo[189286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:50:52 compute-2 sudo[189286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:52 compute-2 sudo[189286]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:53 compute-2 sudo[189402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiveziubbkoiaufmmthcstkdtpqkiaah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089852.37494-2311-96388371195678/AnsiballZ_copy.py'
Jan 22 13:50:53 compute-2 sudo[189402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:53.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:53 compute-2 python3.9[189404]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089852.37494-2311-96388371195678/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:53 compute-2 sudo[189402]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:53.730+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-2 sudo[189554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiwykboglcnhlhccqtrgpyfdlatifksj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089853.5406158-2311-9965367156643/AnsiballZ_stat.py'
Jan 22 13:50:53 compute-2 sudo[189554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:53 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 839 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:53 compute-2 ceph-mon[77081]: pgmap v674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:50:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:54 compute-2 python3.9[189556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:54 compute-2 sudo[189554]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:54 compute-2 sudo[189678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goujuwcprwwrwdskczvdidonfbugazyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089853.5406158-2311-9965367156643/AnsiballZ_copy.py'
Jan 22 13:50:54 compute-2 sudo[189678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:54 compute-2 python3.9[189680]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089853.5406158-2311-9965367156643/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:54 compute-2 sudo[189678]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:54.741+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:55 compute-2 sudo[189830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shmvqfcgrzakjyathgycrzyysoiyjkqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089854.7506-2311-115787325249875/AnsiballZ_stat.py'
Jan 22 13:50:55 compute-2 sudo[189830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:55 compute-2 python3.9[189832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:55 compute-2 sudo[189830]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:55.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:55 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:55 compute-2 ceph-mon[77081]: pgmap v675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:55 compute-2 sudo[189953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbdvlcqmmynjxbdluuaaxyrzscideteg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089854.7506-2311-115787325249875/AnsiballZ_copy.py'
Jan 22 13:50:55 compute-2 sudo[189953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:55.745+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:55 compute-2 python3.9[189955]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089854.7506-2311-115787325249875/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:55 compute-2 sudo[189953]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:56 compute-2 sudo[190105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jculzoealtdaznonibjtgtorwtrguvxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089855.9375217-2311-146330929579461/AnsiballZ_stat.py'
Jan 22 13:50:56 compute-2 sudo[190105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:56 compute-2 python3.9[190107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:56 compute-2 sudo[190105]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:56 compute-2 sudo[190229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjapluyaqjtjmnlvzblgbandltcwhbtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089855.9375217-2311-146330929579461/AnsiballZ_copy.py'
Jan 22 13:50:56 compute-2 sudo[190229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:56.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:50:56 compute-2 python3.9[190231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089855.9375217-2311-146330929579461/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:56 compute-2 sudo[190229]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:50:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:57.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:50:57 compute-2 sudo[190381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rigssermpeowohvxzcntvecajujdonrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089857.0524416-2311-90713588702022/AnsiballZ_stat.py'
Jan 22 13:50:57 compute-2 sudo[190381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:57 compute-2 python3.9[190383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:57 compute-2 sudo[190381]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:57 compute-2 sudo[190384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:57 compute-2 sudo[190384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:57 compute-2 sudo[190384]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:57 compute-2 sudo[190417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:50:57 compute-2 sudo[190417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:50:57 compute-2 sudo[190417]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:57.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:57 compute-2 sudo[190554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbpydovrhrjczwinslxzkpoktigogba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089857.0524416-2311-90713588702022/AnsiballZ_copy.py'
Jan 22 13:50:57 compute-2 sudo[190554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:57 compute-2 ceph-mon[77081]: pgmap v676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:57 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:57 compute-2 python3.9[190556]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089857.0524416-2311-90713588702022/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:57 compute-2 sudo[190554]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.107333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858107428, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1751, "num_deletes": 252, "total_data_size": 3615608, "memory_usage": 3669544, "flush_reason": "Manual Compaction"}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858119576, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1454881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15759, "largest_seqno": 17505, "table_properties": {"data_size": 1449261, "index_size": 2567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17004, "raw_average_key_size": 21, "raw_value_size": 1435883, "raw_average_value_size": 1831, "num_data_blocks": 112, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089723, "oldest_key_time": 1769089723, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12264 microseconds, and 6068 cpu microseconds.
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119619) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1454881 bytes OK
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119638) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.120891) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.120908) EVENT_LOG_v1 {"time_micros": 1769089858120903, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.120927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3607299, prev total WAL file size 3607299, number of live WAL files 2.
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1420KB)], [27(9797KB)]
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858121872, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 11487182, "oldest_snapshot_seqno": -1}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5351 keys, 8490136 bytes, temperature: kUnknown
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858188084, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 8490136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8455584, "index_size": 20042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 133878, "raw_average_key_size": 25, "raw_value_size": 8359733, "raw_average_value_size": 1562, "num_data_blocks": 828, "num_entries": 5351, "num_filter_entries": 5351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.188409) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8490136 bytes
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.190239) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.3 rd, 128.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(13.7) write-amplify(5.8) OK, records in: 5809, records dropped: 458 output_compression: NoCompression
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.190272) EVENT_LOG_v1 {"time_micros": 1769089858190258, "job": 14, "event": "compaction_finished", "compaction_time_micros": 66295, "compaction_time_cpu_micros": 18484, "output_level": 6, "num_output_files": 1, "total_output_size": 8490136, "num_input_records": 5809, "num_output_records": 5351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858190934, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858194192, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:50:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:58.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:58 compute-2 sudo[190707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fikjoqcgumbxlaqfcknmpxpcmkahawgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089858.1223714-2311-187916882300631/AnsiballZ_stat.py'
Jan 22 13:50:58 compute-2 sudo[190707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:58 compute-2 python3.9[190709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:50:58 compute-2 sudo[190707]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:58.720+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:50:59 compute-2 ceph-mon[77081]: pgmap v677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:50:59 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 844 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:50:59 compute-2 sudo[190830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldybqbqljjhlkbdeucdtpzsysahxrjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089858.1223714-2311-187916882300631/AnsiballZ_copy.py'
Jan 22 13:50:59 compute-2 sudo[190830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:50:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:50:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:50:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:59.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:50:59 compute-2 python3.9[190832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089858.1223714-2311-187916882300631/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:50:59 compute-2 sudo[190830]: pam_unix(sudo:session): session closed for user root
Jan 22 13:50:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:59.698+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:50:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:00 compute-2 sudo[190982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbxvgzoyfpatpcmtsrvocemshgpzmfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089859.8205059-2311-49699390424987/AnsiballZ_stat.py'
Jan 22 13:51:00 compute-2 sudo[190982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:00 compute-2 python3.9[190984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:00 compute-2 sudo[190982]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:00 compute-2 sudo[191106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtuvrnlanzwpphhymiqzgbhhtpxpcunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089859.8205059-2311-49699390424987/AnsiballZ_copy.py'
Jan 22 13:51:00 compute-2 sudo[191106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:00.669+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:00 compute-2 python3.9[191108]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089859.8205059-2311-49699390424987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:00 compute-2 sudo[191106]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:01 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:01 compute-2 ceph-mon[77081]: pgmap v678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:01.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:01 compute-2 sudo[191258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdkrkguoftgmwbckkbnemdxmznjyognl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089861.0160618-2311-224093039950460/AnsiballZ_stat.py'
Jan 22 13:51:01 compute-2 sudo[191258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:01 compute-2 python3.9[191260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:01 compute-2 sudo[191258]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:01.646+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:01 compute-2 sudo[191381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrkvxxlozkmelbgxqahpfuvivsdwlnpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089861.0160618-2311-224093039950460/AnsiballZ_copy.py'
Jan 22 13:51:01 compute-2 sudo[191381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:02 compute-2 python3.9[191383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089861.0160618-2311-224093039950460/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:02 compute-2 sudo[191381]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:51:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:02.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:51:02 compute-2 sudo[191534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geyqgvlyuecemlcjehjyxumjwrhomffp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089862.272363-2311-197557444458320/AnsiballZ_stat.py'
Jan 22 13:51:02 compute-2 sudo[191534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:02.660+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:02 compute-2 python3.9[191536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:02 compute-2 sudo[191534]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:03 compute-2 sudo[191657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntvhzwuijpztylmjovoumlionlkbdncc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089862.272363-2311-197557444458320/AnsiballZ_copy.py'
Jan 22 13:51:03 compute-2 sudo[191657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:03 compute-2 ceph-mon[77081]: pgmap v679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:03 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 854 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:03 compute-2 python3.9[191659]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089862.272363-2311-197557444458320/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:03 compute-2 sudo[191657]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:03.634+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:03 compute-2 sudo[191825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljknkvghrwtqftrgcnepuulmpdfroshl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089863.5299273-2311-267890357109361/AnsiballZ_stat.py'
Jan 22 13:51:03 compute-2 sudo[191825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:03 compute-2 podman[191783]: 2026-01-22 13:51:03.928155012 +0000 UTC m=+0.081976973 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 13:51:04 compute-2 python3.9[191832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:04 compute-2 sudo[191825]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:04 compute-2 sudo[191960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fowwvaoennrumpspdqyqdoucflbbpfeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089863.5299273-2311-267890357109361/AnsiballZ_copy.py'
Jan 22 13:51:04 compute-2 sudo[191960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:04 compute-2 python3.9[191962]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089863.5299273-2311-267890357109361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:04 compute-2 sudo[191960]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:04.622+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:04 compute-2 sudo[192112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sniytyauixffqeukmsuzwmhjcruraida ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089864.7334044-2311-247877618896651/AnsiballZ_stat.py'
Jan 22 13:51:04 compute-2 sudo[192112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:05 compute-2 python3.9[192114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:05 compute-2 sudo[192112]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:05 compute-2 ceph-mon[77081]: pgmap v680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:05 compute-2 sudo[192235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raaacpkgikvqzjxgwochtcqeugjxydvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089864.7334044-2311-247877618896651/AnsiballZ_copy.py'
Jan 22 13:51:05 compute-2 sudo[192235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:05.617+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:05 compute-2 python3.9[192237]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089864.7334044-2311-247877618896651/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:05 compute-2 sudo[192235]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:51:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:51:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:06.594+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:07.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:07.576+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:07 compute-2 ceph-mon[77081]: pgmap v681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:07 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 13:51:08 compute-2 python3.9[192388]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:08.600+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:09 compute-2 sudo[192542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieamvotouizdgklllsibcqgcjlwntlph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089868.6337137-2929-183585889574071/AnsiballZ_seboolean.py'
Jan 22 13:51:09 compute-2 sudo[192542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:09.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:09 compute-2 python3.9[192544]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 22 13:51:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:09.583+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:10 compute-2 auditd[699]: Audit daemon rotating log files
Jan 22 13:51:10 compute-2 ceph-mon[77081]: pgmap v682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:10 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 859 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:10 compute-2 sudo[192542]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:10.545+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:11 compute-2 ceph-mon[77081]: pgmap v683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:11.553+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:12.555+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:12 compute-2 sudo[192700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-layhbdvkzkskwdssioahtqcixsahrzya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089872.4458768-2953-276896121613955/AnsiballZ_copy.py'
Jan 22 13:51:12 compute-2 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 22 13:51:12 compute-2 sudo[192700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:12 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:12 compute-2 python3.9[192702]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:13 compute-2 sudo[192700]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:51:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:51:13 compute-2 sudo[192852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlhkozbyhdnibmagvckfsppluluxxbmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089873.1942587-2953-89391324199154/AnsiballZ_copy.py'
Jan 22 13:51:13 compute-2 sudo[192852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:13.549+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:13 compute-2 python3.9[192854]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:13 compute-2 sudo[192852]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:13 compute-2 ceph-mon[77081]: pgmap v684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:14 compute-2 sudo[193004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxvsntrymdaduyqaitzainqptttkjcbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089873.8944864-2953-278756880143694/AnsiballZ_copy.py'
Jan 22 13:51:14 compute-2 sudo[193004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:14 compute-2 python3.9[193006]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:14 compute-2 sudo[193004]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:14.513+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:14 compute-2 sudo[193157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncpbriqjtkobtwkmmxnnkgcwmfkyyldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089874.5798986-2953-50926249599243/AnsiballZ_copy.py'
Jan 22 13:51:14 compute-2 sudo[193157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:15 compute-2 python3.9[193159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:15 compute-2 sudo[193157]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:15.468+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:15 compute-2 sudo[193311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssopztjqrsxjotgrgtibcfhsvjzsqlwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089875.224999-2953-26794607992197/AnsiballZ_copy.py'
Jan 22 13:51:15 compute-2 sudo[193311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:15 compute-2 sshd-session[193160]: Invalid user validator from 92.118.39.95 port 38700
Jan 22 13:51:15 compute-2 sshd-session[193160]: Connection closed by invalid user validator 92.118.39.95 port 38700 [preauth]
Jan 22 13:51:15 compute-2 python3.9[193313]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:15 compute-2 sudo[193311]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:16 compute-2 ceph-mon[77081]: pgmap v685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:16.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:16.486+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:16 compute-2 sudo[193464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxuatnpdzlyzegkkgawfxhlcekefytef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089876.4823515-3061-41314013848100/AnsiballZ_copy.py'
Jan 22 13:51:16 compute-2 sudo[193464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:16 compute-2 python3.9[193466]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:17 compute-2 sudo[193464]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:17 compute-2 ceph-mon[77081]: pgmap v686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:51:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:51:17 compute-2 sudo[193616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istdvpqclkmrsmgmwulvddvtxgaybpih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089877.1392-3061-113120294112816/AnsiballZ_copy.py'
Jan 22 13:51:17 compute-2 sudo[193616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:17.483+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:17 compute-2 sudo[193619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:17 compute-2 sudo[193619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:17 compute-2 sudo[193619]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:17 compute-2 sudo[193644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:17 compute-2 sudo[193644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:17 compute-2 sudo[193644]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:17 compute-2 python3.9[193618]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:17 compute-2 sudo[193616]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:18.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:18.473+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:18 compute-2 sudo[193819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uokgjkrwvejjdyzgcxlsdkxvgovckvcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089878.2791996-3061-83023107321409/AnsiballZ_copy.py'
Jan 22 13:51:18 compute-2 sudo[193819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:18 compute-2 ceph-mon[77081]: pgmap v687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:18 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 864 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:18 compute-2 python3.9[193821]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:18 compute-2 sudo[193819]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:19 compute-2 sudo[193971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regdayvalnpjqmzlppdzqlhucubkogdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089878.9626174-3061-124236746488342/AnsiballZ_copy.py'
Jan 22 13:51:19 compute-2 sudo[193971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:19.424+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:19 compute-2 python3.9[193973]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:19 compute-2 sudo[193971]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:19 compute-2 podman[194097]: 2026-01-22 13:51:19.943774445 +0000 UTC m=+0.066493874 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 13:51:19 compute-2 sudo[194140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxuowqihqyzspguidelhidxdhycjxjct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089879.6128426-3061-59323765839419/AnsiballZ_copy.py'
Jan 22 13:51:19 compute-2 sudo[194140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:20 compute-2 python3.9[194144]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:20 compute-2 sudo[194140]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:20.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:20.458+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:20 compute-2 sudo[194295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptwlfaacpudeowtfgnvsjegluwiange ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089880.4383583-3169-236530533181219/AnsiballZ_systemd.py'
Jan 22 13:51:20 compute-2 sudo[194295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:20 compute-2 ceph-mon[77081]: pgmap v688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:20 compute-2 python3.9[194297]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:21 compute-2 systemd[1]: Reloading.
Jan 22 13:51:21 compute-2 systemd-rc-local-generator[194321]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:21 compute-2 systemd-sysv-generator[194326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:21.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:21 compute-2 systemd[1]: Starting libvirt logging daemon socket...
Jan 22 13:51:21 compute-2 systemd[1]: Listening on libvirt logging daemon socket.
Jan 22 13:51:21 compute-2 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 22 13:51:21 compute-2 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 22 13:51:21 compute-2 systemd[1]: Starting libvirt logging daemon...
Jan 22 13:51:21 compute-2 systemd[1]: Started libvirt logging daemon.
Jan 22 13:51:21 compute-2 sudo[194295]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:21.501+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:21 compute-2 sudo[194488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulgzdslatbemfndzucqspznnrvgoqali ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089881.6271899-3169-32775520366956/AnsiballZ_systemd.py'
Jan 22 13:51:21 compute-2 sudo[194488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:22 compute-2 python3.9[194490]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:22 compute-2 systemd[1]: Reloading.
Jan 22 13:51:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:22.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:22 compute-2 systemd-rc-local-generator[194520]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:22 compute-2 systemd-sysv-generator[194524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:22.545+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:22 compute-2 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 22 13:51:22 compute-2 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 22 13:51:22 compute-2 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 22 13:51:22 compute-2 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 22 13:51:22 compute-2 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 22 13:51:22 compute-2 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 22 13:51:22 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 13:51:22 compute-2 systemd[1]: Started libvirt nodedev daemon.
Jan 22 13:51:22 compute-2 sudo[194488]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:23 compute-2 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 22 13:51:23 compute-2 sudo[194706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fykqnbugvmcuehwgmligofabzpzaxfsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089882.8247242-3169-175295798932310/AnsiballZ_systemd.py'
Jan 22 13:51:23 compute-2 sudo[194706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:23.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:23 compute-2 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 22 13:51:23 compute-2 ceph-mon[77081]: pgmap v689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:23 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:23 compute-2 python3.9[194708]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:23 compute-2 systemd[1]: Reloading.
Jan 22 13:51:23 compute-2 systemd-rc-local-generator[194737]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:23 compute-2 systemd-sysv-generator[194740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:51:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:23.573+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:23 compute-2 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 22 13:51:23 compute-2 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 22 13:51:23 compute-2 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 22 13:51:23 compute-2 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 22 13:51:23 compute-2 systemd[1]: Starting libvirt proxy daemon...
Jan 22 13:51:23 compute-2 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 22 13:51:23 compute-2 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 22 13:51:23 compute-2 systemd[1]: Started libvirt proxy daemon.
Jan 22 13:51:23 compute-2 sudo[194706]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:24 compute-2 sudo[194926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdfrgppmivtwakxylrjwkvyzlvsltppa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089884.003341-3169-103162050522838/AnsiballZ_systemd.py'
Jan 22 13:51:24 compute-2 sudo[194926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:24.571+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:24 compute-2 python3.9[194929]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:24 compute-2 systemd[1]: Reloading.
Jan 22 13:51:24 compute-2 setroubleshoot[194679]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5d18beed-d68d-4a81-b559-48d1464af1ec
Jan 22 13:51:24 compute-2 setroubleshoot[194679]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 22 13:51:24 compute-2 systemd-rc-local-generator[194959]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:24 compute-2 systemd-sysv-generator[194962]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Jan 22 13:51:24 compute-2 systemd[1]: Listening on libvirt locking daemon socket.
Jan 22 13:51:24 compute-2 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 22 13:51:24 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 13:51:25 compute-2 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 22 13:51:25 compute-2 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 22 13:51:25 compute-2 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 22 13:51:25 compute-2 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 22 13:51:25 compute-2 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 22 13:51:25 compute-2 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 22 13:51:25 compute-2 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 22 13:51:25 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 13:51:25 compute-2 systemd[1]: Started libvirt QEMU daemon.
Jan 22 13:51:25 compute-2 sudo[194926]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:25 compute-2 sudo[195143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onxkcwlkcfavvtupfbukaxjodhrfvkfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089885.2284992-3169-27221837373882/AnsiballZ_systemd.py'
Jan 22 13:51:25 compute-2 sudo[195143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:25.560+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:25 compute-2 ceph-mon[77081]: pgmap v690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:25 compute-2 python3.9[195145]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:51:25 compute-2 systemd[1]: Reloading.
Jan 22 13:51:25 compute-2 systemd-sysv-generator[195177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Jan 22 13:51:25 compute-2 systemd-rc-local-generator[195171]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:51:26 compute-2 systemd[1]: Starting libvirt secret daemon socket...
Jan 22 13:51:26 compute-2 systemd[1]: Listening on libvirt secret daemon socket.
Jan 22 13:51:26 compute-2 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 22 13:51:26 compute-2 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 22 13:51:26 compute-2 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 22 13:51:26 compute-2 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 22 13:51:26 compute-2 systemd[1]: Starting libvirt secret daemon...
Jan 22 13:51:26 compute-2 systemd[1]: Started libvirt secret daemon.
Jan 22 13:51:26 compute-2 sudo[195143]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:26.523+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:26 compute-2 ceph-mon[77081]: pgmap v691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:27 compute-2 sudo[195356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohwfmyofajytiqbiqdbrqwjbxfhvwpww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089886.8924315-3281-48140692175911/AnsiballZ_file.py'
Jan 22 13:51:27 compute-2 sudo[195356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:27 compute-2 python3.9[195358]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
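The file module call above creates the directory if needed and enforces the mode; the hand-run equivalent is approximately:
# install -d -m 0755 /var/lib/openstack/config/ceph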
Jan 22 13:51:27 compute-2 sudo[195356]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:27.476+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:27 compute-2 sudo[195508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjetuoqlsppzrelnavclqgxmpatxppap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089887.6602623-3305-148878733325251/AnsiballZ_find.py'
Jan 22 13:51:27 compute-2 sudo[195508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:28 compute-2 python3.9[195510]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
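With recurse=False and file_type=file, the find invocation above corresponds roughly to:
# find /var/lib/openstack/config/ceph -maxdepth 1 -type f -name '*.conf'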
Jan 22 13:51:28 compute-2 sudo[195508]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:28.484+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:28 compute-2 sudo[195661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jidpeywvudwtartrdzcwuudqoeczdrcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089888.4332354-3328-79218330726958/AnsiballZ_command.py'
Jan 22 13:51:28 compute-2 sudo[195661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:28 compute-2 python3.9[195663]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
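Run against the ceph.conf staged above, the script prints the cluster name followed by its fsid (xargs trims the surrounding whitespace); with this deployment's values the output would look something like:
# echo ceph
ceph
# awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
088fe176-0106-5401-803c-2da38b73b76a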
Jan 22 13:51:28 compute-2 sudo[195661]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:51:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:51:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:29.525+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:30 compute-2 python3.9[195817]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:51:30 compute-2 ceph-mon[77081]: pgmap v692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:51:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:51:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:30.552+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:31 compute-2 python3.9[195968]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:31 compute-2 ceph-mon[77081]: pgmap v693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.505690) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891505779, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 645, "num_deletes": 251, "total_data_size": 922669, "memory_usage": 935736, "flush_reason": "Manual Compaction"}
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891513087, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 605960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17510, "largest_seqno": 18150, "table_properties": {"data_size": 602925, "index_size": 943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7970, "raw_average_key_size": 19, "raw_value_size": 596493, "raw_average_value_size": 1469, "num_data_blocks": 42, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089858, "oldest_key_time": 1769089858, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7430 microseconds, and 3540 cpu microseconds.
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.513134) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 605960 bytes OK
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.513153) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.514557) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.514576) EVENT_LOG_v1 {"time_micros": 1769089891514570, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.514596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 919043, prev total WAL file size 919043, number of live WAL files 2.
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.515274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(591KB)], [30(8291KB)]
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891515364, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9096096, "oldest_snapshot_seqno": -1}
Jan 22 13:51:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:31.561+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 5246 keys, 7419107 bytes, temperature: kUnknown
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891568167, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 7419107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7386143, "index_size": 18774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 132570, "raw_average_key_size": 25, "raw_value_size": 7292773, "raw_average_value_size": 1390, "num_data_blocks": 771, "num_entries": 5246, "num_filter_entries": 5246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.568526) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7419107 bytes
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.570117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 140.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.2) OK, records in: 5757, records dropped: 511 output_compression: NoCompression
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.570149) EVENT_LOG_v1 {"time_micros": 1769089891570131, "job": 16, "event": "compaction_finished", "compaction_time_micros": 52948, "compaction_time_cpu_micros": 22730, "output_level": 6, "num_output_files": 1, "total_output_size": 7419107, "num_input_records": 5757, "num_output_records": 5246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891570588, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891572042, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.515182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:51:31 compute-2 python3.9[196089]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089890.7869368-3386-267387329374569/.source.xml follow=False _original_basename=secret.xml.j2 checksum=661e779e9ad9ab9796e6f7af12c5e6a2862cccb5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:32 compute-2 sudo[196239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vahlhndiujedghfgqxetxpfecmovfmas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089892.0625885-3431-49067098507490/AnsiballZ_command.py'
Jan 22 13:51:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:32 compute-2 sudo[196239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:32 compute-2 python3.9[196242]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 088fe176-0106-5401-803c-2da38b73b76a
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
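The undefine/define pair refreshes the libvirt Ceph secret; the secret value itself is set in a follow-up step that exports FSID and KEY into the environment (see the sudo entry below), roughly:
# virsh secret-define --file /tmp/secret.xml
# virsh secret-set-value --secret "$FSID" --base64 "$KEY"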
Jan 22 13:51:32 compute-2 polkitd[43481]: Registered Authentication Agent for unix-process:196244:374326 (system bus name :1.1902 [pkttyagent --process 196244 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 22 13:51:32 compute-2 polkitd[43481]: Unregistered Authentication Agent for unix-process:196244:374326 (system bus name :1.1902, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 22 13:51:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:32.543+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:32 compute-2 polkitd[43481]: Registered Authentication Agent for unix-process:196243:374326 (system bus name :1.1903 [pkttyagent --process 196243 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 22 13:51:32 compute-2 polkitd[43481]: Unregistered Authentication Agent for unix-process:196243:374326 (system bus name :1.1903, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 22 13:51:32 compute-2 sudo[196239]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:33.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:33 compute-2 python3.9[196404]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:33.522+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:33 compute-2 sudo[196554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzjeyfyfghjkdslyfpylvmfbiqybndda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089893.6331842-3478-116340400249000/AnsiballZ_command.py'
Jan 22 13:51:33 compute-2 sudo[196554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:33 compute-2 ceph-mon[77081]: pgmap v694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:33 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 884 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:34 compute-2 sudo[196554]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:34.565+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:34 compute-2 sudo[196721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unpnkoqluuqpeblndrxhahatlkdusmzv ; FSID=088fe176-0106-5401-803c-2da38b73b76a KEY=AQCZJnJpAAAAABAAqtkA7doM+5EIMhShr22e9w== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089894.3601785-3502-95558996136250/AnsiballZ_command.py'
Jan 22 13:51:34 compute-2 sudo[196721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:34 compute-2 podman[196682]: 2026-01-22 13:51:34.712528633 +0000 UTC m=+0.094165231 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 13:51:34 compute-2 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 22 13:51:34 compute-2 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 22 13:51:34 compute-2 polkitd[43481]: Registered Authentication Agent for unix-process:196738:374563 (system bus name :1.1906 [pkttyagent --process 196738 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 22 13:51:34 compute-2 polkitd[43481]: Unregistered Authentication Agent for unix-process:196738:374563 (system bus name :1.1906, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 22 13:51:34 compute-2 sudo[196721]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:35.551+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:35 compute-2 sudo[196893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcxwydfayqtxupaukmjgevnftikocgzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089895.4291258-3526-108367915072049/AnsiballZ_copy.py'
Jan 22 13:51:35 compute-2 sudo[196893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:35 compute-2 ceph-mon[77081]: pgmap v695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:35 compute-2 python3.9[196895]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
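With remote_src=True this is a local host-side copy; done by hand it would be approximately:
# install -o root -g root -m 0644 /var/lib/openstack/config/ceph/ceph.conf /etc/ceph/ceph.conf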
Jan 22 13:51:35 compute-2 sudo[196893]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:36 compute-2 sudo[197046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyxgbppflvapcyzimhgkiwlicuzdqomz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089896.1838663-3551-224981364821614/AnsiballZ_stat.py'
Jan 22 13:51:36 compute-2 sudo[197046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:36.597+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:36 compute-2 python3.9[197048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:36 compute-2 sudo[197046]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:36 compute-2 ceph-mon[77081]: pgmap v696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:37 compute-2 sudo[197169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxgorasitknqmgamiciunsipybwmuffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089896.1838663-3551-224981364821614/AnsiballZ_copy.py'
Jan 22 13:51:37 compute-2 sudo[197169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:37.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:37 compute-2 python3.9[197171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089896.1838663-3551-224981364821614/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:37 compute-2 sudo[197169]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:37.592+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:37 compute-2 sudo[197248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:37 compute-2 sudo[197248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:37 compute-2 sudo[197248]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:37 compute-2 sudo[197293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:37 compute-2 sudo[197293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:37 compute-2 sudo[197293]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:38 compute-2 sudo[197371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjgzursyfcrrsqyzrrjwcpjgfbdhfjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089897.7725086-3599-204124603448068/AnsiballZ_file.py'
Jan 22 13:51:38 compute-2 sudo[197371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:38 compute-2 python3.9[197373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:38 compute-2 sudo[197371]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:51:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:51:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:38.631+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:38 compute-2 sudo[197524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drcsdduxnqoigqkdcyqvqopujfyggjjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089898.473389-3622-203635999170998/AnsiballZ_stat.py'
Jan 22 13:51:38 compute-2 sudo[197524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:39 compute-2 python3.9[197526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:39 compute-2 sudo[197524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:51:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:39.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:51:39 compute-2 sudo[197602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjrknfncrmawmtutfsfvsosewadqwfll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089898.473389-3622-203635999170998/AnsiballZ_file.py'
Jan 22 13:51:39 compute-2 sudo[197602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:39 compute-2 python3.9[197604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:39 compute-2 sudo[197602]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:39.606+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:40 compute-2 ceph-mon[77081]: pgmap v697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:40 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 889 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:40 compute-2 sudo[197754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zohmhlaqsobyffzrxfxzvdispebumlbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089899.8838623-3659-46359332706586/AnsiballZ_stat.py'
Jan 22 13:51:40 compute-2 sudo[197754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:51:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:51:40 compute-2 python3.9[197756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:40 compute-2 sudo[197754]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:40 compute-2 sudo[197833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olecuhudfeotxyrsbnpcmumirnckffju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089899.8838623-3659-46359332706586/AnsiballZ_file.py'
Jan 22 13:51:40 compute-2 sudo[197833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:40.640+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:40 compute-2 python3.9[197835]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8o62472j recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:40 compute-2 sudo[197833]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:41.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:41 compute-2 sudo[197985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilzwpxkamlcwerzykyysipjihespnzba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089901.0769997-3695-272209635544082/AnsiballZ_stat.py'
Jan 22 13:51:41 compute-2 sudo[197985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:41 compute-2 ceph-mon[77081]: pgmap v698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:41 compute-2 python3.9[197987]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:41.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:41 compute-2 sudo[197985]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:42 compute-2 sudo[198063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpuuqznybpygzsigrtmtjfiqrkbhynev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089901.0769997-3695-272209635544082/AnsiballZ_file.py'
Jan 22 13:51:42 compute-2 sudo[198063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:42 compute-2 python3.9[198065]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:42 compute-2 sudo[198063]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:42.620+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:42 compute-2 sudo[198216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myirflkiyakibpuxnfxwxizejxzgoota ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089902.5431886-3734-140647797009934/AnsiballZ_command.py'
Jan 22 13:51:42 compute-2 sudo[198216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:42 compute-2 ceph-mon[77081]: pgmap v699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:43 compute-2 python3.9[198218]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:43 compute-2 sudo[198216]: pam_unix(sudo:session): session closed for user root
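[annotation] The task above snapshots the live ruleset in JSON form before the EDPM firewall files are rewritten. A standalone sketch of the same gather step; the chain-counting filter is illustrative and not part of the logged task:

    # Dump the current ruleset as JSON, then count user-defined chains.
    nft -j list ruleset > /tmp/ruleset.json
    python3 - <<'EOF'
    import json
    data = json.load(open('/tmp/ruleset.json'))
    # nft -j emits {"nftables": [{"metainfo": ...}, {"table": ...}, {"chain": ...}, ...]}
    chains = [e['chain'] for e in data.get('nftables', []) if 'chain' in e]
    print(f"{len(chains)} chains currently defined")
    EOF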
Jan 22 13:51:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:43.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:43.595+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:43 compute-2 sudo[198369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzcmfqnqacciglolcblbziumzsuaowtp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769089903.3640363-3758-250104050031675/AnsiballZ_edpm_nftables_from_files.py'
Jan 22 13:51:43 compute-2 sudo[198369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:43 compute-2 python3[198371]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 13:51:44 compute-2 sudo[198369]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:44 compute-2 sshd-session[198373]: error: kex_exchange_identification: read: Connection reset by peer
Jan 22 13:51:44 compute-2 sshd-session[198373]: Connection reset by 176.120.22.52 port 49365
Jan 22 13:51:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:44.576+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:44 compute-2 sudo[198524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbpvvjjkapodcbzrpzyaqnnibcgoadux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089904.23593-3782-97942371212249/AnsiballZ_stat.py'
Jan 22 13:51:44 compute-2 sudo[198524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:44 compute-2 python3.9[198526]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:44 compute-2 sudo[198524]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:44 compute-2 sudo[198602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahhtaftysipotznzlrzlvxfwpqpaxvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089904.23593-3782-97942371212249/AnsiballZ_file.py'
Jan 22 13:51:44 compute-2 sudo[198602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:45 compute-2 python3.9[198604]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:45 compute-2 sudo[198602]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:45.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:45.615+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:45 compute-2 sudo[198754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aipvzihlarhrjvikkvwwfokphfbngzaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089905.5175152-3818-260952607559155/AnsiballZ_stat.py'
Jan 22 13:51:45 compute-2 sudo[198754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:45 compute-2 ceph-mon[77081]: pgmap v700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:46 compute-2 python3.9[198756]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:46 compute-2 sudo[198754]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:51:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:51:46 compute-2 sudo[198880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asublhbrnducwljicopbbfkitambitle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089905.5175152-3818-260952607559155/AnsiballZ_copy.py'
Jan 22 13:51:46 compute-2 sudo[198880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:46 compute-2 python3.9[198882]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089905.5175152-3818-260952607559155/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:46 compute-2 sudo[198880]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:46.658+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:47 compute-2 sshd-session[198883]: Invalid user sol from 45.148.10.240 port 41328
Jan 22 13:51:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:47 compute-2 ceph-mon[77081]: pgmap v701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:47 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:51:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:51:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:51:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:51:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:51:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:51:47 compute-2 sshd-session[198883]: Connection closed by invalid user sol 45.148.10.240 port 41328 [preauth]
Jan 22 13:51:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:51:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:47.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:51:47 compute-2 sudo[199034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkhijvlrvxuwwzpwmzevxcbmikohebgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089906.9843247-3863-86800363329343/AnsiballZ_stat.py'
Jan 22 13:51:47 compute-2 sudo[199034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:47 compute-2 python3.9[199036]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:47 compute-2 sudo[199034]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:47.677+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:47 compute-2 sudo[199112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckcbxxakkeiqfnlthoxcqohxhnfybqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089906.9843247-3863-86800363329343/AnsiballZ_file.py'
Jan 22 13:51:47 compute-2 sudo[199112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:47 compute-2 python3.9[199114]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:47 compute-2 sudo[199112]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:51:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:51:48 compute-2 sudo[199265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjihtjjnjszjkzmxoqongfejryhptin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089908.2168076-3899-69542743683990/AnsiballZ_stat.py'
Jan 22 13:51:48 compute-2 sudo[199265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:48.630+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:48 compute-2 python3.9[199267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:48 compute-2 sudo[199265]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:48 compute-2 sudo[199343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqoehahdifktzatodcchpxecusqjkmpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089908.2168076-3899-69542743683990/AnsiballZ_file.py'
Jan 22 13:51:48 compute-2 sudo[199343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:49 compute-2 python3.9[199345]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:49 compute-2 sudo[199343]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:49.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:49.587+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:49 compute-2 sudo[199495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkirbkeeewoqkjykhrlqmzgtkzczecaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089909.5171597-3935-231743260845193/AnsiballZ_stat.py'
Jan 22 13:51:49 compute-2 sudo[199495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:50 compute-2 python3.9[199497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:50 compute-2 sudo[199495]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:50 compute-2 sudo[199635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jawoqwxlszaoeqzsugcmibhvvncityqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089909.5171597-3935-231743260845193/AnsiballZ_copy.py'
Jan 22 13:51:50 compute-2 sudo[199635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:50 compute-2 podman[199595]: 2026-01-22 13:51:50.417210722 +0000 UTC m=+0.049000527 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
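[annotation] The podman event above records a scheduled healthcheck pass for ovn_metadata_agent (health_status=healthy, failing streak 0); per the logged config_data, the check is the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. To re-run it by hand, something like the following should work, since podman's healthcheck subcommand exits 0 when the container is healthy:

    podman healthcheck run ovn_metadata_agent && echo healthy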
Jan 22 13:51:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:50.604+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:50 compute-2 python3.9[199643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089909.5171597-3935-231743260845193/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:50 compute-2 sudo[199635]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:50 compute-2 ceph-mon[77081]: pgmap v702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:50 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:50 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:51.569+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:51 compute-2 sudo[199793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffyewrylvbbzpiukeltmxlddieimeegg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089911.6933727-3979-74371899477277/AnsiballZ_file.py'
Jan 22 13:51:51 compute-2 sudo[199793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:52 compute-2 ceph-mon[77081]: pgmap v703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:52 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:52 compute-2 python3.9[199795]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:52 compute-2 sudo[199793]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:52.619+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:52 compute-2 sudo[199946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-donkcjxnylodascwtmhagsjxvjndhbtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089912.5078003-4004-88613563721364/AnsiballZ_command.py'
Jan 22 13:51:52 compute-2 sudo[199946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:52 compute-2 python3.9[199948]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:52 compute-2 sudo[199946]: pam_unix(sudo:session): session closed for user root
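[annotation] The command above is the role's syntax check: the five fragment files are concatenated in load order and fed to nft in check-only mode, so nothing is committed if any fragment fails to parse. Reproduced as a standalone sketch, with paths taken from the log; set -o pipefail keeps a failing cat from being masked by nft reading an empty stream:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: parse and check only, do not apply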
Jan 22 13:51:53 compute-2 sudo[199953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:53 compute-2 sudo[199953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-2 sudo[199953]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-2 sudo[200001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:51:53 compute-2 sudo[200001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-2 sudo[200001]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-2 sudo[200026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:53 compute-2 sudo[200026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-2 sudo[200026]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-2 sudo[200051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:51:53 compute-2 sudo[200051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:53.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:53.572+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:53 compute-2 ceph-mon[77081]: pgmap v704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:53 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:53 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 904 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:51:53 compute-2 sudo[200051]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:53 compute-2 sudo[200233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbpnwgxgcbtzgrtgyagscwechapqrmrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089913.447465-4029-231881868345044/AnsiballZ_blockinfile.py'
Jan 22 13:51:53 compute-2 sudo[200233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:54 compute-2 python3.9[200235]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:54 compute-2 sudo[200233]: pam_unix(sudo:session): session closed for user root
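[annotation] Based on the blockinfile parameters logged above (marker '# {mark} ANSIBLE MANAGED BLOCK', marker_begin=BEGIN, marker_end=END, validated with 'nft -c -f %s' before being written), the managed stanza in /etc/sysconfig/nftables.conf would come out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK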
Jan 22 13:51:54 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:51:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:54.622+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:54 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:54 compute-2 sudo[200387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-styefobjjznzjuwqjtmcvqeqtukeazlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089914.6517248-4056-125275143320117/AnsiballZ_command.py'
Jan 22 13:51:54 compute-2 sudo[200387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:55 compute-2 python3.9[200389]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:55 compute-2 sudo[200387]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:55.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:55.606+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:55 compute-2 sudo[200540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olbvpyymzjkupslbthkdnwdpmpizzwoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089915.4766307-4080-12636630880923/AnsiballZ_stat.py'
Jan 22 13:51:55 compute-2 sudo[200540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:55 compute-2 python3.9[200542]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:51:55 compute-2 sudo[200540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:56 compute-2 ceph-mon[77081]: pgmap v705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:56.621+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:56 compute-2 sudo[200695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpinkypyskkrplvqwtglnwnujidhuwau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089916.481707-4103-111688778419030/AnsiballZ_command.py'
Jan 22 13:51:56 compute-2 sudo[200695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:51:57 compute-2 python3.9[200697]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:51:57 compute-2 sudo[200695]: pam_unix(sudo:session): session closed for user root
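[annotation] This is the apply phase that matches the earlier check: chains are first ensured by the separate 'nft -f /etc/nftables/edpm-chains.nft' run logged above, then flushes, rules, and jump updates are streamed to a single nft invocation, which nftables applies as one atomic batch so the ruleset never sits in a half-applied state. Condensed, assuming the same file layout:

    nft -f /etc/nftables/edpm-chains.nft           # ensure chains exist (idempotent)
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -   # one transaction: all or nothing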
Jan 22 13:51:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:57.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:57.607+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:57 compute-2 sudo[200850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnuxgbmeeelsqswripijnfclwtirhxmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089917.5065-4127-172591832684259/AnsiballZ_file.py'
Jan 22 13:51:57 compute-2 sudo[200850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:57 compute-2 python3.9[200852]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:57 compute-2 sudo[200850]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:58 compute-2 sudo[200853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:58 compute-2 sudo[200853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:58 compute-2 sudo[200853]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:58 compute-2 sudo[200901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:51:58 compute-2 sudo[200901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:51:58 compute-2 sudo[200901]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:58 compute-2 ceph-mon[77081]: pgmap v706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:58.562+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:58 compute-2 sudo[201053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjnoudtweptkhxkwwfjoapurhybvewda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089918.2983782-4151-192221711211481/AnsiballZ_stat.py'
Jan 22 13:51:58 compute-2 sudo[201053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:58 compute-2 python3.9[201055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:51:58 compute-2 sudo[201053]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:59 compute-2 sudo[201176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpfyebgdzksnracaqfhfqkhqzudymhgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089918.2983782-4151-192221711211481/AnsiballZ_copy.py'
Jan 22 13:51:59 compute-2 sudo[201176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:51:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:51:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:51:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:51:59 compute-2 python3.9[201178]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089918.2983782-4151-192221711211481/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:51:59 compute-2 sudo[201176]: pam_unix(sudo:session): session closed for user root
Jan 22 13:51:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:59.566+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:51:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:51:59 compute-2 ceph-mon[77081]: pgmap v707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:51:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:51:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:51:59 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:00 compute-2 sudo[201328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqvnetyudxsqzgssofrryfpebosbadup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089919.7510564-4196-246123384437746/AnsiballZ_stat.py'
Jan 22 13:52:00 compute-2 sudo[201328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:00 compute-2 python3.9[201330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:00 compute-2 sudo[201328]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:52:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:52:00 compute-2 sudo[201452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbnwgsbnbseffjzrbkcvwgkeyxvqjulh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089919.7510564-4196-246123384437746/AnsiballZ_copy.py'
Jan 22 13:52:00 compute-2 sudo[201452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:00.611+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:00 compute-2 python3.9[201454]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089919.7510564-4196-246123384437746/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:00 compute-2 sudo[201452]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:52:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:52:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:52:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:52:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:52:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:52:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:01.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:01.656+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
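[annotation] The recurring _set_new_cache_sizes line is the monitor's cache autotuner re-splitting a roughly 0.95 GiB budget between incremental maps, full maps, and the RocksDB cache. A hedged look at the relevant knobs (standard Ceph config keys; the small budget suggests this CI cluster lowered the default 2 GiB target):

    $ ceph config get mon mon_memory_target
    $ ceph config get mon mon_osd_cache_size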
Jan 22 13:52:02 compute-2 ceph-mon[77081]: pgmap v708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:02.652+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:02 compute-2 sudo[201605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsldevsfpskpugbuduowcayxxuckxsee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089921.289507-4241-198919477861700/AnsiballZ_stat.py'
Jan 22 13:52:02 compute-2 sudo[201605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:03 compute-2 python3.9[201607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:03 compute-2 sudo[201605]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:03.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:03 compute-2 sudo[201728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxxxnxusswqthhlccgbjfjrcdrnvzze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089921.289507-4241-198919477861700/AnsiballZ_copy.py'
Jan 22 13:52:03 compute-2 sudo[201728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:03 compute-2 python3.9[201730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089921.289507-4241-198919477861700/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:03 compute-2 sudo[201728]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:03.663+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:03 compute-2 ceph-mon[77081]: pgmap v709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:03 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:04 compute-2 sudo[201881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzgzttdckwuenrwdknhupptichpfqxlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089924.0411353-4285-39455816421562/AnsiballZ_systemd.py'
Jan 22 13:52:04 compute-2 sudo[201881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:04.632+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:04 compute-2 python3.9[201883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:52:04 compute-2 systemd[1]: Reloading.
Jan 22 13:52:04 compute-2 systemd-rc-local-generator[201909]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:04 compute-2 systemd-sysv-generator[201912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:05 compute-2 systemd[1]: Reached target edpm_libvirt.target.
Jan 22 13:52:05 compute-2 sudo[201881]: pam_unix(sudo:session): session closed for user root
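[annotation] The systemd module call above (daemon_reload=True, enabled=True, state=restarted) is confirmed by the "Reloading." and "Reached target edpm_libvirt.target." lines. Its manual equivalent, as a sketch:

    # Roughly what ansible.builtin.systemd did for edpm_libvirt.target:
    $ systemctl daemon-reload
    $ systemctl enable edpm_libvirt.target
    $ systemctl restart edpm_libvirt.target
    $ systemctl is-active edpm_libvirt.target
    active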
Jan 22 13:52:05 compute-2 podman[201919]: 2026-01-22 13:52:05.082895578 +0000 UTC m=+0.089844713 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
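[annotation] The podman event above is a periodic health check for ovn_controller reporting health_status=healthy with a failing streak of 0; the container's full config is echoed into the event. The same state can be queried interactively:

    # Read the last recorded health state, then run the configured check once by hand.
    $ podman inspect --format '{{.State.Health.Status}}' ovn_controller
    healthy
    $ podman healthcheck run ovn_controller && echo healthy
    healthy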
Jan 22 13:52:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:05.598+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:06 compute-2 sudo[202095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djirihblymrwzgxmqglhlxdregqghfsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089925.8456042-4310-9834545557190/AnsiballZ_systemd.py'
Jan 22 13:52:06 compute-2 sudo[202095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:06.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:06 compute-2 python3.9[202097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 13:52:06 compute-2 systemd[1]: Reloading.
Jan 22 13:52:06 compute-2 systemd-sysv-generator[202130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:06 compute-2 systemd-rc-local-generator[202126]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:06.634+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:06 compute-2 systemd[1]: Reloading.
Jan 22 13:52:06 compute-2 systemd-rc-local-generator[202161]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:06 compute-2 systemd-sysv-generator[202165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:07 compute-2 sudo[202095]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:52:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:52:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:07.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:08 compute-2 ceph-mon[77081]: pgmap v710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:08.711+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:09 compute-2 ceph-mon[77081]: pgmap v711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-2 ceph-mon[77081]: pgmap v712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:09 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:09.714+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:09 compute-2 sshd-session[144232]: Connection closed by 192.168.122.30 port 34248
Jan 22 13:52:09 compute-2 sshd-session[144229]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:52:09 compute-2 systemd[1]: session-48.scope: Deactivated successfully.
Jan 22 13:52:09 compute-2 systemd[1]: session-48.scope: Consumed 3min 24.178s CPU time.
Jan 22 13:52:09 compute-2 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Jan 22 13:52:09 compute-2 systemd-logind[787]: Removed session 48.
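[annotation] Sessions like 48 and 49 are Zuul's per-playbook SSH connections; logind wraps each in a session-N.scope, which is why CPU time is accounted per session on teardown. A sketch for correlating a session with its journal entries (session numbers vary per run):

    $ loginctl list-sessions
    $ journalctl -u session-48.scope --no-pager | tail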
Jan 22 13:52:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:10.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:10.721+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000089s ======
Jan 22 13:52:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:11.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000089s
Jan 22 13:52:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:11.677+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:12.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:13 compute-2 ceph-mon[77081]: pgmap v713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:13.682+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:13 compute-2 sudo[202199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:13 compute-2 sudo[202199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:13 compute-2 sudo[202199]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:13 compute-2 sudo[202224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:52:13 compute-2 sudo[202224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:13 compute-2 sudo[202224]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 13:52:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 13:52:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:14 compute-2 ceph-mon[77081]: pgmap v714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:52:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:14 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:52:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:14.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:15 compute-2 sshd-session[202250]: Accepted publickey for zuul from 192.168.122.30 port 59414 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:52:15 compute-2 systemd-logind[787]: New session 49 of user zuul.
Jan 22 13:52:15 compute-2 systemd[1]: Started Session 49 of User zuul.
Jan 22 13:52:15 compute-2 sshd-session[202250]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:52:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 13:52:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:15.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 13:52:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:15.640+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:16 compute-2 ceph-mon[77081]: pgmap v715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:16 compute-2 python3.9[202403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:52:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:16.633+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:52:17 compute-2 ceph-mon[77081]: pgmap v716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:17 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:17.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:17.632+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:18 compute-2 sudo[202559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:18 compute-2 sudo[202559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:18 compute-2 sudo[202559]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:18 compute-2 python3.9[202558]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:52:18 compute-2 network[202623]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:52:18 compute-2 network[202624]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:52:18 compute-2 network[202625]: It is advised to switch to 'NetworkManager' instead for network management.
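[annotation] These three network[...] lines are the legacy initscripts deprecation warning, printed whenever the SysV 'network' service runs; together with the systemd-sysv-generator notes above they show this node still manages interfaces through network-scripts. To inspect the compatibility unit the generator synthesized, as a sketch:

    $ systemctl cat network.service
    $ systemctl status network.service --no-pager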
Jan 22 13:52:18 compute-2 sudo[202586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:18 compute-2 sudo[202586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:18 compute-2 sudo[202586]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:18.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:18.648+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.678401) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938678460, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 763, "num_deletes": 250, "total_data_size": 1334039, "memory_usage": 1355136, "flush_reason": "Manual Compaction"}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938767039, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 868137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18155, "largest_seqno": 18913, "table_properties": {"data_size": 864527, "index_size": 1390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8225, "raw_average_key_size": 17, "raw_value_size": 856944, "raw_average_value_size": 1862, "num_data_blocks": 61, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089892, "oldest_key_time": 1769089892, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 88765 microseconds, and 5161 cpu microseconds.
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.767175) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 868137 bytes OK
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.767218) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768219) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768234) EVENT_LOG_v1 {"time_micros": 1769089938768230, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768250) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1329876, prev total WAL file size 1346271, number of live WAL files 2.
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.769034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(847KB)], [33(7245KB)]
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938769087, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 8287244, "oldest_snapshot_seqno": -1}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5194 keys, 7744059 bytes, temperature: kUnknown
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938978081, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7744059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711088, "index_size": 18909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 133612, "raw_average_key_size": 25, "raw_value_size": 7618246, "raw_average_value_size": 1466, "num_data_blocks": 757, "num_entries": 5194, "num_filter_entries": 5194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.978391) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7744059 bytes
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.982889) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.6 rd, 37.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(18.5) write-amplify(8.9) OK, records in: 5706, records dropped: 512 output_compression: NoCompression
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.982911) EVENT_LOG_v1 {"time_micros": 1769089938982899, "job": 18, "event": "compaction_finished", "compaction_time_micros": 209148, "compaction_time_cpu_micros": 17973, "output_level": 6, "num_output_files": 1, "total_output_size": 7744059, "num_input_records": 5706, "num_output_records": 5194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938983202, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938984637, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:52:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
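[annotation] This burst is the monitor compacting its RocksDB store: job 17 flushes a 763-entry memtable to L0 table #35 (868137 bytes), and job 18 merges it with the existing L6 file into table #36 (7744059 bytes). The amplification figures in the compaction summary follow directly from those logged sizes:

    # write-amplify = L6 bytes written / new bytes entering from L0: 7744059 / 868137
    $ awk 'BEGIN { printf "%.1f\n", 7744059/868137 }'
    8.9
    # read-write-amplify = (bytes read + bytes written) / new bytes: (8287244 + 7744059) / 868137
    $ awk 'BEGIN { printf "%.1f\n", (8287244+7744059)/868137 }'
    18.5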
Jan 22 13:52:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:19.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:19.610+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:19 compute-2 ceph-mon[77081]: pgmap v717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:19 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:20.616+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:20 compute-2 podman[202664]: 2026-01-22 13:52:20.679055548 +0000 UTC m=+0.055052051 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:52:21 compute-2 ceph-mon[77081]: pgmap v718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:21.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:21.568+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:22 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:52:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:22.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:52:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:22.617+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:23 compute-2 sudo[202919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uivqfeotrdzwvighopdfxzoamhipcxsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089942.7142413-103-65313859483837/AnsiballZ_setup.py'
Jan 22 13:52:23 compute-2 sudo[202919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:23.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:23 compute-2 python3.9[202921]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 13:52:23 compute-2 ceph-mon[77081]: pgmap v719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:23 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:23.628+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:23 compute-2 sudo[202919]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:24 compute-2 sudo[203003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dletvwlaspwrieusnvpafpbvjiwnasct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089942.7142413-103-65313859483837/AnsiballZ_dnf.py'
Jan 22 13:52:24 compute-2 sudo[203003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:24 compute-2 python3.9[203005]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:52:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:24.659+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:25.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:25.616+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:25 compute-2 ceph-mon[77081]: pgmap v720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:25 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:26.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:26.641+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:27.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:27.645+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:28 compute-2 ceph-mon[77081]: pgmap v721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:28 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:28.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:28.610+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:29.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:29.607+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:29 compute-2 sudo[203003]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:29 compute-2 ceph-mon[77081]: pgmap v722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:29 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:30.575+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:30 compute-2 sudo[203160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kejccecsdexakxyqlcobkvtcdsqtkwgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089950.3608391-140-150444651512951/AnsiballZ_stat.py'
Jan 22 13:52:30 compute-2 sudo[203160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:30 compute-2 python3.9[203162]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:52:30 compute-2 sudo[203160]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:31.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:31.590+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:31 compute-2 ceph-mon[77081]: pgmap v723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:31 compute-2 sudo[203312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxqiovkgrpzifqacrvujjsdcdnueveok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089951.5482872-171-126606780143879/AnsiballZ_command.py'
Jan 22 13:52:31 compute-2 sudo[203312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:32 compute-2 python3.9[203314]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:52:32 compute-2 sudo[203312]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:32.578+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:32 compute-2 ceph-mon[77081]: pgmap v724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:32 compute-2 sudo[203466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-styykbypjoyridcqzgdvoskpmwgpjigs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089952.6631267-200-244453666791015/AnsiballZ_stat.py'
Jan 22 13:52:32 compute-2 sudo[203466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:33 compute-2 python3.9[203468]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:52:33 compute-2 sudo[203466]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:33.532+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:33 compute-2 sudo[203618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erilhpitnupgowhrberpxpbgdwfrbdik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089953.523088-224-141417454254171/AnsiballZ_command.py'
Jan 22 13:52:33 compute-2 sudo[203618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:33 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:34 compute-2 python3.9[203620]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:52:34 compute-2 sudo[203618]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:52:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:52:34 compute-2 sudo[203772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahklotsdqgypdrotvxabviknithpxuzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089954.241237-248-18983675206636/AnsiballZ_stat.py'
Jan 22 13:52:34 compute-2 sudo[203772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:34.539+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:34 compute-2 python3.9[203774]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:34 compute-2 sudo[203772]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:35 compute-2 sudo[203904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzocymtvdxczvuubwukzbsxuxbhjllio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089954.241237-248-18983675206636/AnsiballZ_copy.py'
Jan 22 13:52:35 compute-2 sudo[203904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:35 compute-2 podman[203869]: 2026-01-22 13:52:35.340838099 +0000 UTC m=+0.096989403 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 13:52:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:35.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:35 compute-2 ceph-mon[77081]: pgmap v725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:35 compute-2 python3.9[203911]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089954.241237-248-18983675206636/.source.iscsi _original_basename=.zlma7wjw follow=False checksum=ac6eeee5c3166b111e4e31f108595919a1a56d1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:35 compute-2 sudo[203904]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:35.559+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:36 compute-2 sudo[204070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfcquevobjruhjbttgobyxzukqissdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089955.7185667-293-118956393533794/AnsiballZ_file.py'
Jan 22 13:52:36 compute-2 sudo[204070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:36 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:36 compute-2 python3.9[204073]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:36.559+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:36 compute-2 sudo[204070]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:37.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:37 compute-2 sudo[204223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hytvmyboyxspdblgkpnbxfnulmkupzfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089956.8490255-317-36242812935096/AnsiballZ_lineinfile.py'
Jan 22 13:52:37 compute-2 sudo[204223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:37 compute-2 python3.9[204225]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:52:37 compute-2 sudo[204223]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:37.576+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:37 compute-2 ceph-mon[77081]: pgmap v726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:38.534+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:38 compute-2 sudo[204376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqlupatjfvfbinhyrxwzexldzmxeeynp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089957.9537354-345-129043556705024/AnsiballZ_systemd_service.py'
Jan 22 13:52:38 compute-2 sudo[204376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:38 compute-2 python3.9[204378]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:52:38 compute-2 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 22 13:52:39 compute-2 sudo[204376]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:39 compute-2 sudo[204383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:39 compute-2 sudo[204383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:39 compute-2 sudo[204383]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:39 compute-2 sudo[204432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:39 compute-2 sudo[204432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:39 compute-2 sudo[204432]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:39.517+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:39 compute-2 ceph-mon[77081]: pgmap v727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:39 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:40.500+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:40 compute-2 sudo[204583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckuedohfjzhudbbdltjelcfstrzszgfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089960.2347689-367-269772884167355/AnsiballZ_systemd_service.py'
Jan 22 13:52:40 compute-2 sudo[204583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:40 compute-2 python3.9[204585]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:52:40 compute-2 systemd[1]: Reloading.
Jan 22 13:52:40 compute-2 systemd-rc-local-generator[204614]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:40 compute-2 systemd-sysv-generator[204617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:41 compute-2 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 13:52:41 compute-2 systemd[1]: Starting Open-iSCSI...
Jan 22 13:52:41 compute-2 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 13:52:41 compute-2 systemd[1]: Started Open-iSCSI.
Jan 22 13:52:41 compute-2 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 22 13:52:41 compute-2 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 22 13:52:41 compute-2 sudo[204583]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:41.451+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:41 compute-2 ceph-mon[77081]: pgmap v728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:41 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:42.403+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:42 compute-2 ceph-mon[77081]: pgmap v729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:42 compute-2 python3.9[204785]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:52:42 compute-2 network[204802]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:52:42 compute-2 network[204803]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:52:42 compute-2 network[204804]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:52:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:43.353+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:44.322+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:44 compute-2 ceph-mon[77081]: pgmap v730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:44 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:45.372+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:46.406+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:52:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:52:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:52:47.158 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:52:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:52:47.158 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:52:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:52:47.158 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:52:47 compute-2 ceph-mon[77081]: pgmap v731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:47.368+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:48.357+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:48 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:49.389+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:49 compute-2 ceph-mon[77081]: pgmap v732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:49 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:49 compute-2 sudo[205077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioenibyprwwchwoncnkuqowespxuqjwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089969.6904752-437-59617682234369/AnsiballZ_dnf.py'
Jan 22 13:52:49 compute-2 sudo[205077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:50 compute-2 python3.9[205079]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:52:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:50.344+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:51 compute-2 podman[205082]: 2026-01-22 13:52:51.004286138 +0000 UTC m=+0.067030201 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 13:52:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:51.384+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:52 compute-2 ceph-mon[77081]: pgmap v733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:52.365+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:52.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:53 compute-2 ceph-mon[77081]: pgmap v734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:52:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:52:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:53.405+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:53 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:52:53 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:52:53 compute-2 systemd[1]: Reloading.
Jan 22 13:52:53 compute-2 systemd-rc-local-generator[205139]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:52:53 compute-2 systemd-sysv-generator[205146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:52:53 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:52:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:54.377+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:52:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:52:54 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
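The SLOW_OPS "blocked for N sec" counter in the health update above grows by 5 between updates (963 here, then 968, 973, 978, 983 further down), so the monitor is re-evaluating health roughly every 5 seconds while the same op keeps aging. Working back from this update:

    from datetime import datetime, timedelta

    update = datetime(2026, 1, 22, 13, 52, 54)   # time of this health update
    print(update - timedelta(seconds=963))       # -> 2026-01-22 13:36:51

    # 968 s at 13:52:58, 973 s at 13:53:03, ... all resolve to the same
    # start time, i.e. one stuck op aging linearly, not a stream of new ops.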
Jan 22 13:52:55 compute-2 sudo[205077]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:55 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:52:55 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:52:55 compute-2 systemd[1]: run-r61bf3cef94aa40e1a45e9be128813cc7.service: Deactivated successfully.
Jan 22 13:52:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:55.408+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:55 compute-2 ceph-mon[77081]: pgmap v735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:56.453+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:56 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:56 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:56 compute-2 sudo[205416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwxeajljqaknchhgqhhyjpgkopnduhgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089975.6023293-463-198378982732681/AnsiballZ_file.py'
Jan 22 13:52:56 compute-2 sudo[205416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
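The sudo COMMAND above wraps the real module in /bin/sh -c 'echo BECOME-SUCCESS-<random> ; python3.9 .../AnsiballZ_file.py': the Ansible controller scans the output for that unique token to confirm privilege escalation succeeded before module JSON begins. A sketch of that detection logic (names are illustrative, not Ansible's actual implementation):

    marker = "BECOME-SUCCESS-kwxeajljqaknchhgqhhyjpgkopnduhgv"  # from the log line

    def strip_become_banner(stdout: str, marker: str) -> str:
        """Return only the output after the become marker, if present."""
        head, sep, tail = stdout.partition(marker)
        return tail.lstrip("\r\n") if sep else stdout

    print(strip_become_banner(marker + "\n{\"changed\": false}", marker))
    # -> {"changed": false}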
Jan 22 13:52:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:52:57 compute-2 python3.9[205418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 13:52:57 compute-2 sudo[205416]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:57.421+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:57 compute-2 ceph-mon[77081]: pgmap v736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:57 compute-2 sudo[205568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdrygnzzuwtlxdlxildprayofurokgfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089977.5085654-488-259733481827889/AnsiballZ_modprobe.py'
Jan 22 13:52:57 compute-2 sudo[205568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:58 compute-2 python3.9[205570]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 22 13:52:58 compute-2 sudo[205568]: pam_unix(sudo:session): session closed for user root
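The community.general.modprobe task loads dm-multipath immediately; persistent=disabled leaves boot-time persistence to the /etc/modules-load.d file written in the next tasks. A rough sketch of the same effect by hand:

    import subprocess

    # Load the module now; modprobe maps '-' to '_' internally.
    subprocess.run(["modprobe", "dm-multipath"], check=True)

    # Verify: the kernel lists it under its canonical name, dm_multipath.
    with open("/proc/modules") as f:
        assert any(l.split()[0] == "dm_multipath" for l in f), "not loaded"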
Jan 22 13:52:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:52:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:52:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:58.461+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:58 compute-2 ceph-mon[77081]: pgmap v737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:52:58 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:52:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:59 compute-2 sudo[205733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zasmwaikjxyrkftajdeyrxwbybknoaur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089978.8074253-514-82993534823883/AnsiballZ_stat.py'
Jan 22 13:52:59 compute-2 sudo[205733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:59 compute-2 sudo[205719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:59 compute-2 sudo[205719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:59 compute-2 sudo[205719]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:59 compute-2 sudo[205753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:52:59 compute-2 sudo[205753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:52:59 compute-2 sudo[205753]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:52:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:52:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:52:59 compute-2 python3.9[205750]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:52:59 compute-2 sudo[205733]: pam_unix(sudo:session): session closed for user root
Jan 22 13:52:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:59.426+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:52:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:59 compute-2 sudo[205898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffhlrmhclokmqbsccfnbqkxffqjsztte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089978.8074253-514-82993534823883/AnsiballZ_copy.py'
Jan 22 13:52:59 compute-2 sudo[205898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:52:59 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:52:59 compute-2 python3.9[205900]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089978.8074253-514-82993534823883/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:00 compute-2 sudo[205898]: pam_unix(sudo:session): session closed for user root
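ansible.legacy.copy ships the rendered template (module-load.conf.j2) to /etc/modules-load.d/dm-multipath.conf and compares SHA-1 checksums to decide whether anything changed. The same comparison as a sketch, with the expected value taken from the log line above:

    import hashlib

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "065061c60917e4f67cecc70d12ce55e42f9d0b3f"
    print(sha1_of("/etc/modules-load.d/dm-multipath.conf") == expected)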
Jan 22 13:53:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:00.447+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:00.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:00 compute-2 sudo[206051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcxpvapfzpklnxiekclfqtdvmlnhsuvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089980.4042249-559-258659799839422/AnsiballZ_lineinfile.py'
Jan 22 13:53:00 compute-2 sudo[206051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:01 compute-2 python3.9[206053]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:01 compute-2 sudo[206051]: pam_unix(sudo:session): session closed for user root
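The lineinfile task ensures /etc/modules contains a dm-multipath line, creating the file if needed (create=True, mode 0644). A minimal idempotent re-implementation, as a sketch:

    import os

    def ensure_line(path: str, line: str, mode: int = 0o644) -> bool:
        """Append `line` if absent; return True when the file changed."""
        try:
            with open(path) as f:
                lines = f.read().splitlines()
        except FileNotFoundError:
            lines = []
        if line in lines:
            return False
        with open(path, "a") as f:
            f.write(line + "\n")
        os.chmod(path, mode)
        return True

    print(ensure_line("/etc/modules", "dm-multipath"))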
Jan 22 13:53:01 compute-2 ceph-mon[77081]: pgmap v738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:01 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:01.399+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:01 compute-2 sudo[206203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxidpcmyakurfacstanbxtxmuvzskcec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089981.3025885-583-82535087830404/AnsiballZ_systemd.py'
Jan 22 13:53:01 compute-2 sudo[206203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:02 compute-2 python3.9[206205]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:02 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 13:53:02 compute-2 systemd[1]: Stopped Load Kernel Modules.
Jan 22 13:53:02 compute-2 systemd[1]: Stopping Load Kernel Modules...
Jan 22 13:53:02 compute-2 systemd[1]: Starting Load Kernel Modules...
Jan 22 13:53:02 compute-2 systemd[1]: Finished Load Kernel Modules.
Jan 22 13:53:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:02.359+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:02 compute-2 sudo[206203]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:02.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:02 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:03 compute-2 sudo[206360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yngekxswlowmklsvaiuqyjpmecsckjbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089982.7968585-608-196500572798231/AnsiballZ_command.py'
Jan 22 13:53:03 compute-2 sudo[206360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:03.320+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:03 compute-2 python3.9[206362]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:53:03 compute-2 sudo[206360]: pam_unix(sudo:session): session closed for user root
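restorecon -nvr /etc/multipath is a dry run: -n reports files whose SELinux context differs from policy without changing anything, -v prints each would-be relabel, -r recurses. Driving it from Python, as a sketch:

    import subprocess

    res = subprocess.run(
        ["/usr/sbin/restorecon", "-nvr", "/etc/multipath"],
        capture_output=True, text=True, check=True,
    )
    # Any output lines name files that would be relabeled.
    mislabeled = [l for l in res.stdout.splitlines() if l.strip()]
    print(mislabeled or "all contexts match policy")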
Jan 22 13:53:03 compute-2 ceph-mon[77081]: pgmap v739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:03 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:04 compute-2 sudo[206513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucigddgncqfedpgcmzcqyeowpvokxupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089983.8868814-638-75570631599131/AnsiballZ_stat.py'
Jan 22 13:53:04 compute-2 sudo[206513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:04.360+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:04 compute-2 python3.9[206515]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:53:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:53:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:04.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:53:04 compute-2 sudo[206513]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:05 compute-2 ceph-mon[77081]: pgmap v740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:05 compute-2 sudo[206666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmllwyvblfnuxyxvdykcuuiinwroxnwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089984.8598263-664-102599571371128/AnsiballZ_stat.py'
Jan 22 13:53:05 compute-2 sudo[206666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:05.357+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:05 compute-2 python3.9[206668]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:53:05 compute-2 sudo[206666]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:05 compute-2 sudo[206807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpuhyyzqspfkhaakdorkwreaottjqula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089984.8598263-664-102599571371128/AnsiballZ_copy.py'
Jan 22 13:53:05 compute-2 sudo[206807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:05 compute-2 podman[206763]: 2026-01-22 13:53:05.925637739 +0000 UTC m=+0.128529958 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 13:53:06 compute-2 python3.9[206811]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089984.8598263-664-102599571371128/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:06 compute-2 sudo[206807]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:06.328+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:06.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:07 compute-2 sudo[206968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxvqqkpzuclkcqhqrjapldjihxbzvvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089986.6328948-710-105685316810670/AnsiballZ_command.py'
Jan 22 13:53:07 compute-2 sudo[206968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:07 compute-2 python3.9[206970]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:53:07 compute-2 sudo[206968]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:07.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:07 compute-2 ceph-mon[77081]: pgmap v741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:07 compute-2 sudo[207121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qujpgswwjpbrpszagrwmltahwrawnkcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089987.4764185-734-125721787735823/AnsiballZ_lineinfile.py'
Jan 22 13:53:07 compute-2 sudo[207121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:07 compute-2 python3.9[207123]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:08 compute-2 sudo[207121]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:08.314+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:08.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:08 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:08 compute-2 sudo[207274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfklbnkebsvqyfmqsnerddkdyltcxzjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089988.3014736-758-183307742913691/AnsiballZ_replace.py'
Jan 22 13:53:08 compute-2 sudo[207274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:09 compute-2 python3.9[207276]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:09 compute-2 sudo[207274]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:09.355+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:09.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:09 compute-2 ceph-mon[77081]: pgmap v742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:09 compute-2 sudo[207426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydrkrzhtprncbtmetwtiiilgdyzcbqjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089989.291389-782-58511492799722/AnsiballZ_replace.py'
Jan 22 13:53:09 compute-2 sudo[207426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:09 compute-2 python3.9[207428]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:09 compute-2 sudo[207426]: pam_unix(sudo:session): session closed for user root
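Taken together, the grep check, the lineinfile that adds "blacklist {", and the two replace edits above normalize the blacklist section: ensure an opening line exists, close it with "}", then strip any devnode ".*" catch-all inside it. Assuming no blacklist section existed beforehand, /etc/multipath.conf should now carry an empty block, roughly:

    blacklist {
    }

An empty blacklist stops multipath from excluding every device node, leaving device selection to the defaults set in the following tasks.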
Jan 22 13:53:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:10.394+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:10 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:10 compute-2 sudo[207579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgcfisyxqvtvuswnohyxbfxbojrrgeeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089990.475159-809-104947867916111/AnsiballZ_lineinfile.py'
Jan 22 13:53:10 compute-2 sudo[207579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:11 compute-2 python3.9[207581]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:11 compute-2 sudo[207579]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:53:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:53:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:11.431+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:11 compute-2 sudo[207731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raakuevjkdqranuuzybznirpeeutygjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089991.2190616-809-145320646827827/AnsiballZ_lineinfile.py'
Jan 22 13:53:11 compute-2 sudo[207731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:11 compute-2 ceph-mon[77081]: pgmap v743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:11 compute-2 python3.9[207733]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:11 compute-2 sudo[207731]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:12 compute-2 sudo[207883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noefmvdhfdwassfneertdryeodomytbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089991.9078226-809-92012878509426/AnsiballZ_lineinfile.py'
Jan 22 13:53:12 compute-2 sudo[207883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:12 compute-2 python3.9[207885]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:12.397+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:12 compute-2 sudo[207883]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:12 compute-2 sudo[208036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpqatzfbheoiguccejmmafegmftsgrmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089992.560397-809-270502566669401/AnsiballZ_lineinfile.py'
Jan 22 13:53:12 compute-2 sudo[208036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:13 compute-2 python3.9[208038]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:13 compute-2 sudo[208036]: pam_unix(sudo:session): session closed for user root
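Each of the four lineinfile tasks above (find_multipaths, recheck_wwid, skip_kpartx, user_friendly_names) inserts immediately after the line matching ^defaults, so later insertions land above earlier ones. Assuming none of the keys were present beforehand, the defaults section should end up roughly as:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
            ...
    }

with "..." standing for whatever the shipped multipath.conf already had in that section.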
Jan 22 13:53:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:13.388+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:13 compute-2 ceph-mon[77081]: pgmap v744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:13 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:14 compute-2 sudo[208166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:14 compute-2 sudo[208211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botygenrslapdxbmhwzbqinoospqcboh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089993.6574218-896-126837661138556/AnsiballZ_stat.py'
Jan 22 13:53:14 compute-2 sudo[208166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-2 sudo[208211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:14 compute-2 sudo[208166]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-2 sudo[208216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:53:14 compute-2 sudo[208216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-2 sudo[208216]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-2 sudo[208241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:14 compute-2 python3.9[208215]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:53:14 compute-2 sudo[208241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-2 sudo[208241]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-2 sudo[208211]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:14 compute-2 sudo[208267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:53:14 compute-2 sudo[208267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:14.418+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:14 compute-2 ceph-mon[77081]: pgmap v745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:14 compute-2 sudo[208267]: pam_unix(sudo:session): session closed for user root
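cephadm's gather-facts subcommand, run here by the ceph-admin orchestration user with a 895 s timeout, prints host inventory as JSON for the orchestrator to poll. Reading it from Python, as a sketch (binary path copied from the log line above; the fact field names are an assumption and may differ across cephadm releases):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["python3", CEPHADM, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    ).stdout
    facts = json.loads(out)
    print(facts.get("hostname"), facts.get("memory_total_kb"))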
Jan 22 13:53:15 compute-2 sudo[208474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvflpfacxnmqdzttfhyoldbyviynkevl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089994.7191854-920-196771993911160/AnsiballZ_command.py'
Jan 22 13:53:15 compute-2 sudo[208474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:15 compute-2 python3.9[208476]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:53:15 compute-2 sudo[208474]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:15.460+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:15 compute-2 sudo[208627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrcvmpanlmrfaxsxynurxeuzolqjgave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089995.6662898-947-139188944367585/AnsiballZ_systemd_service.py'
Jan 22 13:53:15 compute-2 sudo[208627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:16 compute-2 python3.9[208629]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:16 compute-2 systemd[1]: Listening on multipathd control socket.
Jan 22 13:53:16 compute-2 sudo[208627]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:16.450+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:17 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:53:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:53:17 compute-2 sudo[208784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbkmzgxcflwrwnsggmattddwzpsfxcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089996.6764529-971-169703849285814/AnsiballZ_systemd_service.py'
Jan 22 13:53:17 compute-2 sudo[208784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:17 compute-2 python3.9[208786]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:17 compute-2 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 22 13:53:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:17.477+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:17 compute-2 udevadm[208791]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 22 13:53:17 compute-2 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 22 13:53:17 compute-2 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 13:53:17 compute-2 multipathd[208794]: --------start up--------
Jan 22 13:53:17 compute-2 multipathd[208794]: read /etc/multipath.conf
Jan 22 13:53:17 compute-2 multipathd[208794]: path checkers start up
Jan 22 13:53:17 compute-2 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 13:53:17 compute-2 sudo[208784]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:18 compute-2 ceph-mon[77081]: pgmap v746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:53:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:53:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:53:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:18.479+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:18.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:18 compute-2 sudo[208952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fixmgdhpxlssydvptbzvdpovczobhfud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089998.5358593-1007-281065861020794/AnsiballZ_file.py'
Jan 22 13:53:18 compute-2 sudo[208952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:19 compute-2 ceph-mon[77081]: pgmap v747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:19 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:19 compute-2 python3.9[208954]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 13:53:19 compute-2 sudo[208952]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:19 compute-2 sudo[208979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:19 compute-2 sudo[208979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:19 compute-2 sudo[208979]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:19 compute-2 sudo[209007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:19 compute-2 sudo[209007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:19 compute-2 sudo[209007]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:19.440+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:19 compute-2 sudo[209154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhwceqwmavazanrxszpqewpvuredxcny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769089999.4251337-1032-258328653182782/AnsiballZ_modprobe.py'
Jan 22 13:53:19 compute-2 sudo[209154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:19 compute-2 python3.9[209156]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 22 13:53:19 compute-2 kernel: Key type psk registered
Jan 22 13:53:19 compute-2 sudo[209154]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:20 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:20.436+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:53:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:53:20 compute-2 sudo[209317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjtxtsuneztqomuxnzhalzrleubiqdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090000.232714-1056-131343871564022/AnsiballZ_stat.py'
Jan 22 13:53:20 compute-2 sudo[209317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:20 compute-2 python3.9[209319]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:53:20 compute-2 sudo[209317]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:21 compute-2 sudo[209451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpopelanfusjprbfxsasvfwcmlbhjedj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090000.232714-1056-131343871564022/AnsiballZ_copy.py'
Jan 22 13:53:21 compute-2 sudo[209451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:21 compute-2 podman[209414]: 2026-01-22 13:53:21.273371422 +0000 UTC m=+0.096263549 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 13:53:21 compute-2 ceph-mon[77081]: pgmap v748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 13:53:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:21.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 13:53:21 compute-2 python3.9[209453]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769090000.232714-1056-131343871564022/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:21.436+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:21 compute-2 sudo[209451]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:22 compute-2 sudo[209610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxpoxvztqomuqurquampccjktcmusgin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090001.8358462-1103-265189735460256/AnsiballZ_lineinfile.py'
Jan 22 13:53:22 compute-2 sudo[209610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:22 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:22 compute-2 python3.9[209612]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:22 compute-2 sudo[209610]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:22.390+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:22 compute-2 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 22 13:53:23 compute-2 sudo[209764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nixvaihhduybtqumeqioqmacukndkhhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090002.6490026-1127-207090379742061/AnsiballZ_systemd.py'
Jan 22 13:53:23 compute-2 sudo[209764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:23 compute-2 python3.9[209766]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:23 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 13:53:23 compute-2 systemd[1]: Stopped Load Kernel Modules.
Jan 22 13:53:23 compute-2 systemd[1]: Stopping Load Kernel Modules...
Jan 22 13:53:23 compute-2 systemd[1]: Starting Load Kernel Modules...
Jan 22 13:53:23 compute-2 systemd[1]: Finished Load Kernel Modules.
Jan 22 13:53:23 compute-2 sudo[209764]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:23.392+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:23.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:23 compute-2 ceph-mon[77081]: pgmap v749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:23 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:23 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:53:23 compute-2 sudo[209795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:23 compute-2 sudo[209795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:23 compute-2 sudo[209795]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:23 compute-2 sudo[209820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:53:23 compute-2 sudo[209820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:23 compute-2 sudo[209820]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:23 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 13:53:23 compute-2 sudo[209971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvaaucnyyxbxomijxptofxljcfhyjgwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090003.7189245-1150-22768746747508/AnsiballZ_dnf.py'
Jan 22 13:53:23 compute-2 sudo[209971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:24 compute-2 python3.9[209973]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 13:53:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:24.366+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:24.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:25.416+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:25 compute-2 ceph-mon[77081]: pgmap v750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:25 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:26.401+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:26 compute-2 systemd[1]: Reloading.
Jan 22 13:53:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:26 compute-2 systemd-sysv-generator[210012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:26 compute-2 systemd-rc-local-generator[210007]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:26 compute-2 systemd[1]: Reloading.
Jan 22 13:53:26 compute-2 systemd-rc-local-generator[210041]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:26 compute-2 systemd-sysv-generator[210045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:27 compute-2 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 13:53:27 compute-2 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 13:53:27 compute-2 lvm[210086]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 13:53:27 compute-2 lvm[210086]: VG ceph_vg0 finished
Jan 22 13:53:27 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 13:53:27 compute-2 systemd[1]: Starting man-db-cache-update.service...
Jan 22 13:53:27 compute-2 systemd[1]: Reloading.
Jan 22 13:53:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:27.380+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:27 compute-2 systemd-rc-local-generator[210139]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:27 compute-2 systemd-sysv-generator[210142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:27 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 13:53:27 compute-2 ceph-mon[77081]: pgmap v751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:28 compute-2 sudo[209971]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:28 compute-2 sshd-session[210397]: Invalid user node from 92.118.39.95 port 45886
Jan 22 13:53:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:28.338+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:28 compute-2 sshd-session[210397]: Connection closed by invalid user node 92.118.39.95 port 45886 [preauth]
Jan 22 13:53:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 13:53:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 13:53:28 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:28 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:29 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 13:53:29 compute-2 systemd[1]: Finished man-db-cache-update.service.
Jan 22 13:53:29 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1.279s CPU time.
Jan 22 13:53:29 compute-2 systemd[1]: run-rc6463b1310544d1999f86623272abec1.service: Deactivated successfully.
Jan 22 13:53:29 compute-2 sudo[211443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkdwsooiuwzeyskvkuwdunjjwkeaqrmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090009.0268042-1175-44771457693169/AnsiballZ_systemd_service.py'
Jan 22 13:53:29 compute-2 sudo[211443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:29.321+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:29 compute-2 python3.9[211445]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:29 compute-2 systemd[1]: Stopping Open-iSCSI...
Jan 22 13:53:29 compute-2 iscsid[204625]: iscsid shutting down.
Jan 22 13:53:29 compute-2 systemd[1]: iscsid.service: Deactivated successfully.
Jan 22 13:53:29 compute-2 systemd[1]: Stopped Open-iSCSI.
Jan 22 13:53:29 compute-2 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 13:53:29 compute-2 systemd[1]: Starting Open-iSCSI...
Jan 22 13:53:29 compute-2 systemd[1]: Started Open-iSCSI.
Jan 22 13:53:29 compute-2 sudo[211443]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:29 compute-2 ceph-mon[77081]: pgmap v752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:30.281+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 13:53:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 13:53:30 compute-2 sudo[211600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymiippxxeyiquannrsrikxcziaeaukd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090010.4566548-1199-47175830730956/AnsiballZ_systemd_service.py'
Jan 22 13:53:30 compute-2 sudo[211600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:31 compute-2 ceph-mon[77081]: pgmap v753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:31 compute-2 python3.9[211602]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:53:31 compute-2 multipathd[208794]: exit (signal)
Jan 22 13:53:31 compute-2 multipathd[208794]: --------shut down-------
Jan 22 13:53:31 compute-2 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 22 13:53:31 compute-2 systemd[1]: multipathd.service: Deactivated successfully.
Jan 22 13:53:31 compute-2 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 22 13:53:31 compute-2 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 13:53:31 compute-2 multipathd[211608]: --------start up--------
Jan 22 13:53:31 compute-2 multipathd[211608]: read /etc/multipath.conf
Jan 22 13:53:31 compute-2 multipathd[211608]: path checkers start up
Jan 22 13:53:31 compute-2 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 13:53:31 compute-2 sudo[211600]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:31.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:31 compute-2 python3.9[211765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 13:53:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:32.281+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 13:53:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:32.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 13:53:33 compute-2 sudo[211920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rofakpcdqjdmnvycshfpivayxflgnshu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090012.7172325-1251-11438211398889/AnsiballZ_file.py'
Jan 22 13:53:33 compute-2 sudo[211920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:33 compute-2 python3.9[211922]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:33 compute-2 sudo[211920]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:33.286+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:33 compute-2 ceph-mon[77081]: pgmap v754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:33 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:34 compute-2 sudo[212072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akzultsdxzzjggdwafelofxpjbljdzdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090013.7787535-1284-270479460096414/AnsiballZ_systemd_service.py'
Jan 22 13:53:34 compute-2 sudo[212072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:34.287+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:34 compute-2 python3.9[212074]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:53:34 compute-2 systemd[1]: Reloading.
Jan 22 13:53:34 compute-2 systemd-rc-local-generator[212103]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:53:34 compute-2 systemd-sysv-generator[212106]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:53:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:34 compute-2 sudo[212072]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:34 compute-2 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 22 13:53:34 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 13:53:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:35.255+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:35.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:35 compute-2 python3.9[212263]: ansible-ansible.builtin.service_facts Invoked
Jan 22 13:53:35 compute-2 network[212280]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 13:53:35 compute-2 network[212281]: 'network-scripts' will be removed from distribution in near future.
Jan 22 13:53:35 compute-2 network[212282]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 13:53:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:36.230+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:36 compute-2 ceph-mon[77081]: pgmap v755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 13:53:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 13:53:36 compute-2 podman[212290]: 2026-01-22 13:53:36.628403081 +0000 UTC m=+0.080697862 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 13:53:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:37.273+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 13:53:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 13:53:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:37 compute-2 ceph-mon[77081]: pgmap v756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:38.276+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:38 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:39.233+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:39.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:39 compute-2 sudo[212458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:39 compute-2 sudo[212458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:39 compute-2 sudo[212458]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:39 compute-2 sudo[212483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:39 compute-2 sudo[212483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:39 compute-2 sudo[212483]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:39 compute-2 ceph-mon[77081]: pgmap v757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:40.274+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:40 compute-2 ceph-mon[77081]: pgmap v758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:41.291+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:41.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:42.339+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:42 compute-2 sudo[212635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kefhansfyopgnjiqdcngjuzhznplkepa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090022.2321408-1344-70773655579184/AnsiballZ_systemd_service.py'
Jan 22 13:53:42 compute-2 sudo[212635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:42 compute-2 python3.9[212637]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:42 compute-2 sudo[212635]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:43 compute-2 ceph-mon[77081]: pgmap v759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:43 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:43.372+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:43 compute-2 sudo[212788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvznfgegvnqxquomijjqmpnkcmtslgxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090023.0225492-1344-244633201405939/AnsiballZ_systemd_service.py'
Jan 22 13:53:43 compute-2 sudo[212788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:43 compute-2 python3.9[212790]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:43 compute-2 sudo[212788]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:44 compute-2 sudo[212941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkgzjqnctujlkchmjexnsasqrjfbjsej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090023.9376402-1344-206763250055372/AnsiballZ_systemd_service.py'
Jan 22 13:53:44 compute-2 sudo[212941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:44.361+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:44 compute-2 python3.9[212943]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:45.340+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:45.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:45 compute-2 sudo[212941]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:46 compute-2 sudo[213095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozcmuwhyzyciairdmfpltdyqgtuqnwda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090025.725425-1344-123755061290288/AnsiballZ_systemd_service.py'
Jan 22 13:53:46 compute-2 sudo[213095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:46 compute-2 python3.9[213097]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:46 compute-2 sudo[213095]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:46.348+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:46 compute-2 sudo[213249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieoevuxpvtssswetgvzbdfmpatltllaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090026.4748025-1344-29059481613651/AnsiballZ_systemd_service.py'
Jan 22 13:53:46 compute-2 sudo[213249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:53:47.159 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:53:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:53:47.159 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:53:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:53:47.160 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:53:47 compute-2 ceph-mon[77081]: pgmap v760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:47 compute-2 python3.9[213251]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:47 compute-2 sudo[213249]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:47.393+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:47.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:47 compute-2 sudo[213402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxzwxccbazdlolwrewhvpxlukiugngt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090027.4624772-1344-277860626676972/AnsiballZ_systemd_service.py'
Jan 22 13:53:47 compute-2 sudo[213402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:48 compute-2 python3.9[213404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:48 compute-2 sudo[213402]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:48 compute-2 ceph-mon[77081]: pgmap v761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:48.398+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:48 compute-2 sudo[213555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glbkjdpejjkaktzauzsrpkmdyzkpwmfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090028.21209-1344-188361479576296/AnsiballZ_systemd_service.py'
Jan 22 13:53:48 compute-2 sudo[213555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 13:53:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 13:53:48 compute-2 python3.9[213558]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:48 compute-2 sudo[213555]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:49 compute-2 sudo[213709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzdazcdodwmevxjdefdqatwakqrjtepc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090028.9720073-1344-106883520000810/AnsiballZ_systemd_service.py'
Jan 22 13:53:49 compute-2 sudo[213709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:49 compute-2 ceph-mon[77081]: pgmap v762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:49 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:49 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:49.396+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:49.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:49 compute-2 python3.9[213711]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:53:49 compute-2 sudo[213709]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:50 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:50.389+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:51 compute-2 sudo[213863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cifgzfcjzaunmajkcktdbrwqgoxnueia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090030.704546-1518-266567519542203/AnsiballZ_file.py'
Jan 22 13:53:51 compute-2 sudo[213863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:51 compute-2 python3.9[213865]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:51 compute-2 sudo[213863]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:51.357+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:51 compute-2 ceph-mon[77081]: pgmap v763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:51.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:51 compute-2 sudo[214025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucmlsygnicxvsuaeqszbuivmuurlwprw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090031.4644902-1518-222831217556913/AnsiballZ_file.py'
Jan 22 13:53:51 compute-2 sudo[214025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:51 compute-2 podman[213989]: 2026-01-22 13:53:51.828220587 +0000 UTC m=+0.083849322 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 13:53:51 compute-2 python3.9[214032]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:51 compute-2 sudo[214025]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:52.326+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:52 compute-2 sudo[214185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klzyafajcnyxxatvcfzjjkhzcpobldsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090032.1309712-1518-148372890192613/AnsiballZ_file.py'
Jan 22 13:53:52 compute-2 sudo[214185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:52.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:52 compute-2 python3.9[214187]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:52 compute-2 sudo[214185]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:53 compute-2 sudo[214338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpltcbupwtiblekwsuxuyvzcgyzeucgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090032.7954683-1518-226524194611321/AnsiballZ_file.py'
Jan 22 13:53:53 compute-2 sudo[214338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:53 compute-2 python3.9[214340]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:53 compute-2 sudo[214338]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:53.337+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:53 compute-2 ceph-mon[77081]: pgmap v764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:53 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:53 compute-2 sudo[214490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyosrspeyttkskywyzjqqhtxifuhwemg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090033.4085054-1518-111780466546850/AnsiballZ_file.py'
Jan 22 13:53:53 compute-2 sudo[214490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:53 compute-2 python3.9[214492]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:53 compute-2 sudo[214490]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:54.365+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:54 compute-2 sudo[214642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcurdxkkoxufdewjdmublbxnxeulpgsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090034.1179848-1518-91251785239463/AnsiballZ_file.py'
Jan 22 13:53:54 compute-2 sudo[214642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:54 compute-2 python3.9[214644]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:54 compute-2 sudo[214642]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:55 compute-2 sudo[214795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtolaathyuzwgrltmefqesozswjszya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090034.8918502-1518-78323159735014/AnsiballZ_file.py'
Jan 22 13:53:55 compute-2 sudo[214795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:55 compute-2 python3.9[214797]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:55 compute-2 sudo[214795]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:55.404+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:55 compute-2 ceph-mon[77081]: pgmap v765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:55 compute-2 sudo[214947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfnzzknsvlizxkvdlqihymhshuzahuex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090035.529088-1518-262947295818867/AnsiballZ_file.py'
Jan 22 13:53:55 compute-2 sudo[214947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:56 compute-2 python3.9[214949]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:56 compute-2 sudo[214947]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:56.379+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:53:57 compute-2 sudo[215100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-allhvzudgpjqupruytolxciogkznsssg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090037.0339518-1689-160192138619546/AnsiballZ_file.py'
Jan 22 13:53:57 compute-2 sudo[215100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:57.396+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:57.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:57 compute-2 python3.9[215102]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:57 compute-2 sudo[215100]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:58 compute-2 sudo[215252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bguuxsjrqadvgdgcwllwuzpgtfbiuzwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090037.73588-1689-93996708995466/AnsiballZ_file.py'
Jan 22 13:53:58 compute-2 sudo[215252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:58 compute-2 python3.9[215254]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:58 compute-2 sudo[215252]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:58.445+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:53:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:58.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:53:58 compute-2 sudo[215405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvbaavduwdgfsbhioygcyukbioittoei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090038.396015-1689-172091758039704/AnsiballZ_file.py'
Jan 22 13:53:58 compute-2 sudo[215405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:58 compute-2 python3.9[215407]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:58 compute-2 sudo[215405]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:59 compute-2 ceph-mon[77081]: pgmap v766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:59 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:59 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:59 compute-2 ceph-mon[77081]: pgmap v767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:53:59 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:53:59 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:59 compute-2 sudo[215557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whlkutddhaxuyoxffpxtcwrdmfyynjyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090039.0372806-1689-235132712764077/AnsiballZ_file.py'
Jan 22 13:53:59 compute-2 sudo[215557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:53:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:59.431+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:53:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:53:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:53:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:53:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:59.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:53:59 compute-2 python3.9[215559]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:53:59 compute-2 sudo[215557]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:59 compute-2 sudo[215564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:59 compute-2 sudo[215564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:59 compute-2 sudo[215564]: pam_unix(sudo:session): session closed for user root
Jan 22 13:53:59 compute-2 sudo[215609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:53:59 compute-2 sudo[215609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:53:59 compute-2 sudo[215609]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:00 compute-2 sudo[215759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eynyixbckkcktepudvgwfyvztncahtqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090039.7788022-1689-203783430868366/AnsiballZ_file.py'
Jan 22 13:54:00 compute-2 sudo[215759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:00 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:00 compute-2 python3.9[215761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:00 compute-2 sudo[215759]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:00.464+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:00 compute-2 sudo[215912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgnlgxusgiepymzalxphzhnwpiinvkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090040.423776-1689-8531913761467/AnsiballZ_file.py'
Jan 22 13:54:00 compute-2 sudo[215912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:00 compute-2 python3.9[215914]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:00 compute-2 sudo[215912]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:01 compute-2 ceph-mon[77081]: pgmap v768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:01 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:01 compute-2 sudo[216064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gelwlxuqswmismshnsbsjtnaoisnbfpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090041.0293877-1689-263022654948170/AnsiballZ_file.py'
Jan 22 13:54:01 compute-2 sudo[216064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:01.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:01.501+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:01 compute-2 python3.9[216066]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:01 compute-2 sudo[216064]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:01 compute-2 sudo[216216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niajopwgpsldrnnuerfujtvbspduznbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090041.6847253-1689-124012885430535/AnsiballZ_file.py'
Jan 22 13:54:01 compute-2 sudo[216216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:02 compute-2 python3.9[216218]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:02 compute-2 sudo[216216]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:02 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:02.541+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:03 compute-2 ceph-mon[77081]: pgmap v769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:03.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:03.531+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:03 compute-2 sudo[216369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgoljbleakjeebqvpklaiofcjhohtok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090043.4455872-1862-163246982406915/AnsiballZ_command.py'
Jan 22 13:54:03 compute-2 sudo[216369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:03 compute-2 python3.9[216371]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:03 compute-2 sudo[216369]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:04 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:04.484+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:04.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:04 compute-2 python3.9[216524]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 13:54:05 compute-2 ceph-mon[77081]: pgmap v770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:05.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:05.488+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:05 compute-2 sudo[216674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoqfrdtlibuylqznzqskelegpnkkuabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090045.4008138-1917-176722666666605/AnsiballZ_systemd_service.py'
Jan 22 13:54:05 compute-2 sudo[216674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:06 compute-2 python3.9[216676]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:54:06 compute-2 systemd[1]: Reloading.
Jan 22 13:54:06 compute-2 systemd-sysv-generator[216707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:54:06 compute-2 systemd-rc-local-generator[216703]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:54:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:06 compute-2 sudo[216674]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:06.475+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:07 compute-2 podman[216812]: 2026-01-22 13:54:07.047320353 +0000 UTC m=+0.099488807 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:54:07 compute-2 sudo[216886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxkpqktaybjtxpvfregbsgffigmieyaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090046.754958-1941-255973055409455/AnsiballZ_command.py'
Jan 22 13:54:07 compute-2 sudo[216886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:07 compute-2 python3.9[216890]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:07 compute-2 sudo[216886]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:07 compute-2 ceph-mon[77081]: pgmap v771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:07.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:07.480+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:07 compute-2 sudo[217042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktuvetznkitlhgsnntwptdoxghqzuoqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090047.4557505-1941-9131834410440/AnsiballZ_command.py'
Jan 22 13:54:07 compute-2 sudo[217042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:07 compute-2 python3.9[217044]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:07 compute-2 sudo[217042]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:08 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:08 compute-2 sudo[217197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrgmryjtjvdmtigdpusniljybggyaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090048.1087646-1941-151701150270644/AnsiballZ_command.py'
Jan 22 13:54:08 compute-2 sudo[217197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:08.505+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:08 compute-2 sshd-session[217090]: Invalid user user from 45.148.10.240 port 57550
Jan 22 13:54:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:08 compute-2 python3.9[217199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:08 compute-2 sudo[217197]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:08 compute-2 sshd-session[217090]: Connection closed by invalid user user 45.148.10.240 port 57550 [preauth]
Jan 22 13:54:09 compute-2 sudo[217351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqsivjjnqjdipwzitjlweorhjfapfxob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090048.7294712-1941-231733657496335/AnsiballZ_command.py'
Jan 22 13:54:09 compute-2 sudo[217351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:09 compute-2 python3.9[217353]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:09 compute-2 sudo[217351]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:09 compute-2 ceph-mon[77081]: pgmap v772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:09.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:09.493+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:09 compute-2 sudo[217504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnebxyahpkbwrmrxicvzngdbyqdcbwyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090049.3410206-1941-109130136517154/AnsiballZ_command.py'
Jan 22 13:54:09 compute-2 sudo[217504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:09 compute-2 python3.9[217506]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:09 compute-2 sudo[217504]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:10 compute-2 sudo[217657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfccvihckqqatwpnmheyxsixwgzvpjcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090049.9414728-1941-60444009184279/AnsiballZ_command.py'
Jan 22 13:54:10 compute-2 sudo[217657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:10 compute-2 python3.9[217659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:10 compute-2 sudo[217657]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:10 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:10.468+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:10.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:10 compute-2 sudo[217811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnbpbhxbnfpxmucdayktyubjatjuqbov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090050.542531-1941-73869797496353/AnsiballZ_command.py'
Jan 22 13:54:10 compute-2 sudo[217811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:11 compute-2 python3.9[217813]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:11 compute-2 sudo[217811]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:11 compute-2 ceph-mon[77081]: pgmap v773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:11 compute-2 sudo[217964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikoyjftbwddxcwlgeyaerrpobjcskvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090051.1996925-1941-259355661089120/AnsiballZ_command.py'
Jan 22 13:54:11 compute-2 sudo[217964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:11.489+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:11 compute-2 python3.9[217966]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 13:54:11 compute-2 sudo[217964]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:12.468+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:12.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.328776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053328869, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1673, "num_deletes": 256, "total_data_size": 3218516, "memory_usage": 3275136, "flush_reason": "Manual Compaction"}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053342402, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2115001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18918, "largest_seqno": 20586, "table_properties": {"data_size": 2108467, "index_size": 3414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16221, "raw_average_key_size": 20, "raw_value_size": 2094087, "raw_average_value_size": 2620, "num_data_blocks": 150, "num_entries": 799, "num_filter_entries": 799, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089938, "oldest_key_time": 1769089938, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 13678 microseconds, and 5808 cpu microseconds.
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342467) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2115001 bytes OK
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342485) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343797) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343813) EVENT_LOG_v1 {"time_micros": 1769090053343808, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3210617, prev total WAL file size 3210617, number of live WAL files 2.
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344962) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2065KB)], [36(7562KB)]
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053345053, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 9859060, "oldest_snapshot_seqno": -1}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5466 keys, 9664217 bytes, temperature: kUnknown
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053417199, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9664217, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9627902, "index_size": 21549, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141058, "raw_average_key_size": 25, "raw_value_size": 9528641, "raw_average_value_size": 1743, "num_data_blocks": 864, "num_entries": 5466, "num_filter_entries": 5466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.417596) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9664217 bytes
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.419118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.4 rd, 133.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 5993, records dropped: 527 output_compression: NoCompression
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.419134) EVENT_LOG_v1 {"time_micros": 1769090053419125, "job": 20, "event": "compaction_finished", "compaction_time_micros": 72306, "compaction_time_cpu_micros": 25884, "output_level": 6, "num_output_files": 1, "total_output_size": 9664217, "num_input_records": 5993, "num_output_records": 5466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053419570, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053420724, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:13.463+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:13 compute-2 ceph-mon[77081]: pgmap v774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:13 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:14.487+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:14.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:14 compute-2 sudo[218119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khjkzgwmowrbebxejrtwuakdcdsxwisp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090054.147014-2148-170164177743052/AnsiballZ_file.py'
Jan 22 13:54:14 compute-2 sudo[218119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:14 compute-2 python3.9[218121]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:14 compute-2 sudo[218119]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:15 compute-2 sudo[218271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjghcbgeqghfkolgpeyeqzgzvrzynzgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090055.0255787-2148-73680287199252/AnsiballZ_file.py'
Jan 22 13:54:15 compute-2 sudo[218271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:15 compute-2 python3.9[218273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:15 compute-2 sudo[218271]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:15.509+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:15 compute-2 ceph-mon[77081]: pgmap v775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:15 compute-2 sudo[218423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjinzwxoodyvxrcxxrtwxdlcsgxwvwlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090055.6485252-2148-239224733810458/AnsiballZ_file.py'
Jan 22 13:54:16 compute-2 sudo[218423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:16 compute-2 python3.9[218425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:16 compute-2 sudo[218423]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:16.473+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:16 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:16.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:17 compute-2 sudo[218576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooaajskngakrlmqyqhvlymuczzwioyml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090056.6746633-2214-91787817429798/AnsiballZ_file.py'
Jan 22 13:54:17 compute-2 sudo[218576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:17 compute-2 python3.9[218578]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:17 compute-2 sudo[218576]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:17.513+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:17 compute-2 ceph-mon[77081]: pgmap v776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:17 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:18 compute-2 sudo[218728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-warzxtfvphfripqwqnhcxfuntlxuhmcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090057.4010115-2214-62145968902209/AnsiballZ_file.py'
Jan 22 13:54:18 compute-2 sudo[218728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:18 compute-2 python3.9[218730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:18 compute-2 sudo[218728]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:18.529+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:18.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:18 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:18 compute-2 sudo[218881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhijaelxyqzplaqbhjviksrjriyluzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090058.606046-2214-265437987073820/AnsiballZ_file.py'
Jan 22 13:54:18 compute-2 sudo[218881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:19 compute-2 python3.9[218883]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:19 compute-2 sudo[218881]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:19.531+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:19 compute-2 sudo[219033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opffrierjoxbhtyualgnubzsalpohbfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090059.3253076-2214-217289394747225/AnsiballZ_file.py'
Jan 22 13:54:19 compute-2 sudo[219033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:19 compute-2 ceph-mon[77081]: pgmap v777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:19 compute-2 python3.9[219035]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:19 compute-2 sudo[219033]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:19 compute-2 sudo[219040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:19 compute-2 sudo[219040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:19 compute-2 sudo[219040]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:19 compute-2 sudo[219086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:19 compute-2 sudo[219086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:19 compute-2 sudo[219086]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:20 compute-2 sudo[219235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcezqvabicodbrkrkotwokjvyynlsprs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090059.9416332-2214-185047285813844/AnsiballZ_file.py'
Jan 22 13:54:20 compute-2 sudo[219235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:20 compute-2 python3.9[219237]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:20 compute-2 sudo[219235]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:20.516+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:20 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:20 compute-2 sudo[219388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oflnfkcooidqeukjanrhnljwekbrhnvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090060.5399237-2214-136475635229750/AnsiballZ_file.py'
Jan 22 13:54:20 compute-2 sudo[219388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:21 compute-2 python3.9[219390]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:21 compute-2 sudo[219388]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:21.497+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:21 compute-2 sudo[219540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvmgtrvngdsbwgwoivprmxkwpwocujbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090061.2328742-2214-96956741266652/AnsiballZ_file.py'
Jan 22 13:54:21 compute-2 sudo[219540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:21 compute-2 ceph-mon[77081]: pgmap v778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:21 compute-2 python3.9[219542]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:21 compute-2 sudo[219540]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:21 compute-2 podman[219567]: 2026-01-22 13:54:21.984905054 +0000 UTC m=+0.050143880 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 13:54:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:22.538+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:22.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:22 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:22 compute-2 ceph-mon[77081]: pgmap v779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:23.492+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:23 compute-2 sudo[219588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:23 compute-2 sudo[219588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:23 compute-2 sudo[219588]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:23 compute-2 sudo[219613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:54:23 compute-2 sudo[219613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:23 compute-2 sudo[219613]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:23 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:23 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:23 compute-2 sudo[219638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:23 compute-2 sudo[219638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:23 compute-2 sudo[219638]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:23 compute-2 sudo[219663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:54:23 compute-2 sudo[219663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:24 compute-2 sudo[219663]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:24.451+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:24.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:24 compute-2 ceph-mon[77081]: pgmap v780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:25.474+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:25 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:54:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:54:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:54:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:54:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:54:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:26.466+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:26 compute-2 ceph-mon[77081]: pgmap v781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:27.454+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:27.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:27 compute-2 sudo[219846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxevyhjwshkaubadqnxeyfcjkycfivj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090067.0983198-2539-251055909381261/AnsiballZ_getent.py'
Jan 22 13:54:27 compute-2 sudo[219846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:27 compute-2 python3.9[219848]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 22 13:54:27 compute-2 sudo[219846]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:28.430+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:28 compute-2 sudo[220000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcrxqhdeddgeqfbolcuusuymzzesvlmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090068.104618-2562-108853224313525/AnsiballZ_group.py'
Jan 22 13:54:28 compute-2 sudo[220000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:28.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:28 compute-2 python3.9[220002]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 13:54:28 compute-2 groupadd[220003]: group added to /etc/group: name=nova, GID=42436
Jan 22 13:54:28 compute-2 groupadd[220003]: group added to /etc/gshadow: name=nova
Jan 22 13:54:28 compute-2 groupadd[220003]: new group: name=nova, GID=42436
Jan 22 13:54:28 compute-2 sudo[220000]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:28 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:28 compute-2 ceph-mon[77081]: pgmap v782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:28 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:29.447+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:29.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:29 compute-2 sudo[220158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbfdugqbhlmgbsrchyfrkqtvgrzsdxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090069.4337738-2587-57169192256591/AnsiballZ_user.py'
Jan 22 13:54:29 compute-2 sudo[220158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:30 compute-2 python3.9[220160]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 13:54:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:30.397+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:30 compute-2 useradd[220162]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 22 13:54:30 compute-2 useradd[220162]: add 'nova' to group 'libvirt'
Jan 22 13:54:30 compute-2 useradd[220162]: add 'nova' to shadow group 'libvirt'
Jan 22 13:54:30 compute-2 sudo[220158]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:30.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:30 compute-2 ceph-mon[77081]: pgmap v783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:31.373+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:31 compute-2 sshd-session[220194]: Accepted publickey for zuul from 192.168.122.30 port 45286 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 13:54:31 compute-2 systemd-logind[787]: New session 50 of user zuul.
Jan 22 13:54:31 compute-2 systemd[1]: Started Session 50 of User zuul.
Jan 22 13:54:31 compute-2 sshd-session[220194]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 13:54:31 compute-2 sshd-session[220197]: Received disconnect from 192.168.122.30 port 45286:11: disconnected by user
Jan 22 13:54:31 compute-2 sshd-session[220197]: Disconnected from user zuul 192.168.122.30 port 45286
Jan 22 13:54:31 compute-2 sshd-session[220194]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:54:31 compute-2 systemd[1]: session-50.scope: Deactivated successfully.
Jan 22 13:54:31 compute-2 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Jan 22 13:54:31 compute-2 systemd-logind[787]: Removed session 50.
Jan 22 13:54:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:32.333+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:32 compute-2 python3.9[220347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000055s ======
Jan 22 13:54:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:32.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.925268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072925560, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 548, "num_deletes": 251, "total_data_size": 649547, "memory_usage": 660696, "flush_reason": "Manual Compaction"}
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072933171, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20591, "largest_seqno": 21134, "table_properties": {"data_size": 413181, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7315, "raw_average_key_size": 19, "raw_value_size": 407344, "raw_average_value_size": 1092, "num_data_blocks": 33, "num_entries": 373, "num_filter_entries": 373, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090053, "oldest_key_time": 1769090053, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8175 microseconds, and 2052 cpu microseconds.
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.933448) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 415944 bytes OK
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.933560) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.935276) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.935295) EVENT_LOG_v1 {"time_micros": 1769090072935289, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.935328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 646312, prev total WAL file size 646312, number of live WAL files 2.
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.936518) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(406KB)], [39(9437KB)]
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072936568, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 10080161, "oldest_snapshot_seqno": -1}
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5324 keys, 8372170 bytes, temperature: kUnknown
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072994113, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8372170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8337808, "index_size": 19980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 138911, "raw_average_key_size": 26, "raw_value_size": 8241822, "raw_average_value_size": 1548, "num_data_blocks": 796, "num_entries": 5324, "num_filter_entries": 5324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.994338) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8372170 bytes
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.995984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.0 rd, 145.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(44.4) write-amplify(20.1) OK, records in: 5839, records dropped: 515 output_compression: NoCompression
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.996004) EVENT_LOG_v1 {"time_micros": 1769090072995996, "job": 22, "event": "compaction_finished", "compaction_time_micros": 57603, "compaction_time_cpu_micros": 19582, "output_level": 6, "num_output_files": 1, "total_output_size": 8372170, "num_input_records": 5839, "num_output_records": 5324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072996201, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 22 13:54:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:54:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072997825, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 22 13:54:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.936422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:54:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:33 compute-2 ceph-mon[77081]: pgmap v784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:54:33 compute-2 python3.9[220469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090072.136436-2661-70723950249857/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:33 compute-2 sudo[220494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:33.300+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:33 compute-2 sudo[220494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:33 compute-2 sudo[220494]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:33 compute-2 sudo[220547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:54:33 compute-2 sudo[220547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:33 compute-2 sudo[220547]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:33.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:33 compute-2 python3.9[220669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:34 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:34 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:34 compute-2 python3.9[220745]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:34.251+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:34.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:34 compute-2 python3.9[220896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:35 compute-2 ceph-mon[77081]: pgmap v785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:35.243+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:35 compute-2 python3.9[221017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090074.33942-2661-193013022464288/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:35.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:36 compute-2 python3.9[221167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:36 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:36.278+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:36 compute-2 python3.9[221288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090075.4237041-2661-169273789506235/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:36.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:37 compute-2 ceph-mon[77081]: pgmap v786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:37 compute-2 python3.9[221439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:37.322+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:37.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:37 compute-2 podman[221534]: 2026-01-22 13:54:37.55688509 +0000 UTC m=+0.077526195 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 13:54:37 compute-2 python3.9[221573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090076.6936722-2661-186240561726968/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:38.284+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:38 compute-2 python3.9[221736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:38.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:38 compute-2 python3.9[221858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090077.8564768-2661-166398153096849/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:39 compute-2 ceph-mon[77081]: pgmap v787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:39 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:39.255+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:39.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:40 compute-2 sudo[221883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:40 compute-2 sudo[221883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:40 compute-2 sudo[221883]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:40 compute-2 sudo[221908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:54:40 compute-2 sudo[221908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:54:40 compute-2 sudo[221908]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:40.300+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:40 compute-2 sudo[222059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-begspmiwwkqusmxvlphdwhliplkcsaag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090080.3239276-2911-11897036736355/AnsiballZ_file.py'
Jan 22 13:54:40 compute-2 sudo[222059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:40 compute-2 python3.9[222061]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:40 compute-2 sudo[222059]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:41 compute-2 ceph-mon[77081]: pgmap v788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:41 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:41.319+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:41 compute-2 sudo[222211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrgviljgukudoeujsipunldbfzrqbpum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090081.1262474-2935-221464056972129/AnsiballZ_copy.py'
Jan 22 13:54:41 compute-2 sudo[222211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:41.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:41 compute-2 python3.9[222213]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:54:41 compute-2 sudo[222211]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:42 compute-2 sudo[222363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buxrbcbhleterauwsadhkwowdrkonmke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090081.8802733-2959-136023417020099/AnsiballZ_stat.py'
Jan 22 13:54:42 compute-2 sudo[222363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:42.317+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:42 compute-2 python3.9[222365]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:54:42 compute-2 sudo[222363]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:42.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:43 compute-2 sudo[222516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolvrbhddxdxurwfuxomgadzeiaprgyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090082.686153-2985-69131524682927/AnsiballZ_stat.py'
Jan 22 13:54:43 compute-2 sudo[222516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:43 compute-2 python3.9[222518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:43 compute-2 ceph-mon[77081]: pgmap v789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:43 compute-2 sudo[222516]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:43.295+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:43.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:43 compute-2 sudo[222639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoqiekgbvtkqcrnncnicsvvlmnsxbrre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090082.686153-2985-69131524682927/AnsiballZ_copy.py'
Jan 22 13:54:43 compute-2 sudo[222639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:43 compute-2 python3.9[222641]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769090082.686153-2985-69131524682927/.source _original_basename=.yenrcdsu follow=False checksum=d73f8e53f15f2892abac02b728024fce172554d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 22 13:54:43 compute-2 sudo[222639]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:44.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:44.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:44 compute-2 python3.9[222794]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:54:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:45 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:45.297+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:45.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:45 compute-2 python3.9[222946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:46 compute-2 ceph-mon[77081]: pgmap v790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:46 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:46 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:46.248+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:46 compute-2 python3.9[223067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090085.2893736-3062-176971979301543/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:46.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:54:47.160 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:54:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:54:47.161 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:54:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:54:47.161 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:54:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:47.199+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:47 compute-2 python3.9[223218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 13:54:47 compute-2 ceph-mon[77081]: pgmap v791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:47 compute-2 python3.9[223339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090086.5943604-3106-129330245165084/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 13:54:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:48.153+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:48 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:48 compute-2 sudo[223490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidquiwaayycnzpntsytapvjtqjeuacx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090088.3471768-3158-79117197816986/AnsiballZ_container_config_data.py'
Jan 22 13:54:48 compute-2 sudo[223490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:48 compute-2 python3.9[223492]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 22 13:54:49 compute-2 sudo[223490]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:49.132+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:49 compute-2 ceph-mon[77081]: pgmap v792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:49 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:49 compute-2 sudo[223642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfsewfeiksiigcxblxkpqlwzimcjvkva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090089.50017-3190-123459207599480/AnsiballZ_container_config_hash.py'
Jan 22 13:54:49 compute-2 sudo[223642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:50 compute-2 python3.9[223644]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:54:50 compute-2 sudo[223642]: pam_unix(sudo:session): session closed for user root
Jan 22 13:54:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:50.171+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:50 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:50.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:51 compute-2 sudo[223795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvjigbptyqjqmvwlzmvqiapjssarninc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769090090.6090736-3220-243123817709539/AnsiballZ_edpm_container_manage.py'
Jan 22 13:54:51 compute-2 sudo[223795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:54:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:51.145+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:51 compute-2 python3[223797]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:54:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:51 compute-2 ceph-mon[77081]: pgmap v793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:52.159+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:52 compute-2 ceph-mon[77081]: pgmap v794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:53 compute-2 podman[223833]: 2026-01-22 13:54:53.030927675 +0000 UTC m=+0.086708115 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 13:54:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:53.143+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:54 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:54:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:54.140+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:54:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:54.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:54:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:55.130+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:55 compute-2 ceph-mon[77081]: pgmap v795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:54:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:56.144+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:56.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:57.122+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:54:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:57.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:58.170+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:58.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:54:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:59.201+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:54:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:54:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:54:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:54:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:00.191+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:00 compute-2 sudo[223887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:00 compute-2 sudo[223887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:00 compute-2 sudo[223887]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:00 compute-2 sudo[223912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:00 compute-2 sudo[223912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:00 compute-2 sudo[223912]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:55:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:55:00 compute-2 podman[223810]: 2026-01-22 13:55:00.955728188 +0000 UTC m=+9.534893691 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 13:55:01 compute-2 podman[223959]: 2026-01-22 13:55:01.119549434 +0000 UTC m=+0.049928308 container create 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 13:55:01 compute-2 podman[223959]: 2026-01-22 13:55:01.089882082 +0000 UTC m=+0.020260976 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 13:55:01 compute-2 python3[223797]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 22 13:55:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:01.202+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:01 compute-2 sudo[223795]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:01 compute-2 ceph-mon[77081]: pgmap v796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:02.190+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:02.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:03.177+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-2 ceph-mon[77081]: pgmap v797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:03 compute-2 ceph-mon[77081]: pgmap v798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:04.180+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-2 sudo[224148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boyaghnzsusrcbaqofmvxsehgsclomob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090104.3667781-3244-115975533022024/AnsiballZ_stat.py'
Jan 22 13:55:04 compute-2 sudo[224148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-2 ceph-mon[77081]: pgmap v799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-2 ceph-mon[77081]: pgmap v800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:04 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:04.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:04 compute-2 python3.9[224150]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:04 compute-2 sudo[224148]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:05.180+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:06 compute-2 sudo[224302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyqgbnvzukojizbsyowbqrgdkgnihsaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090105.725744-3280-43815366890626/AnsiballZ_container_config_data.py'
Jan 22 13:55:06 compute-2 sudo[224302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:06.154+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:06 compute-2 python3.9[224304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 22 13:55:06 compute-2 sudo[224302]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:06.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:06 compute-2 ceph-mon[77081]: pgmap v801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:07 compute-2 sudo[224455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajtxtdjctpibbvggczwaormvlqcsuolr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090106.7549582-3313-151272397892274/AnsiballZ_container_config_hash.py'
Jan 22 13:55:07 compute-2 sudo[224455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:07.192+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:07 compute-2 python3.9[224457]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 13:55:07 compute-2 sudo[224455]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:07.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:08 compute-2 podman[224538]: 2026-01-22 13:55:08.026277394 +0000 UTC m=+0.086773798 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:55:08 compute-2 sudo[224634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrmrwyfwqvhoniqboeonziyutqcoczid ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769090107.8348732-3342-16392671574225/AnsiballZ_edpm_container_manage.py'
Jan 22 13:55:08 compute-2 sudo[224634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:08.159+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:08 compute-2 python3[224636]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 13:55:08 compute-2 podman[224674]: 2026-01-22 13:55:08.589136008 +0000 UTC m=+0.060378474 container create 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:55:08 compute-2 podman[224674]: 2026-01-22 13:55:08.558432607 +0000 UTC m=+0.029675083 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 13:55:08 compute-2 python3[224636]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 22 13:55:08 compute-2 sudo[224634]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:08.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:09.174+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:09 compute-2 sudo[224862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhikwcwuptdbackfrtcwbmmywuxumtaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090109.2926412-3367-21188083706768/AnsiballZ_stat.py'
Jan 22 13:55:09 compute-2 sudo[224862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:09 compute-2 ceph-mon[77081]: pgmap v802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:09 compute-2 python3.9[224864]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:09 compute-2 sudo[224862]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:10.189+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:10 compute-2 sudo[225017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myyjacnvnhdwldipduwpkmlhokafybjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.2569025-3394-25819600869215/AnsiballZ_file.py'
Jan 22 13:55:10 compute-2 sudo[225017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:10.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:10 compute-2 python3.9[225019]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:55:10 compute-2 sudo[225017]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:11.236+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:11 compute-2 sudo[225168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltrahevzzzsalsznugdfgmbaoibsieis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.9074886-3394-195255922898696/AnsiballZ_copy.py'
Jan 22 13:55:11 compute-2 sudo[225168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:11 compute-2 ceph-mon[77081]: pgmap v803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:11.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:11 compute-2 python3.9[225170]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769090110.9074886-3394-195255922898696/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 13:55:11 compute-2 sudo[225168]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:11 compute-2 sudo[225244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpvysstlisiciitoamtjyagvbxcqvahr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.9074886-3394-195255922898696/AnsiballZ_systemd.py'
Jan 22 13:55:11 compute-2 sudo[225244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:12.209+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:12 compute-2 python3.9[225246]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 13:55:12 compute-2 systemd[1]: Reloading.
Jan 22 13:55:12 compute-2 systemd-rc-local-generator[225263]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:55:12 compute-2 systemd-sysv-generator[225267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:55:12 compute-2 sudo[225244]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:12 compute-2 sudo[225356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eintbicgopwvketpzklzijiuosscpnye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090110.9074886-3394-195255922898696/AnsiballZ_systemd.py'
Jan 22 13:55:12 compute-2 sudo[225356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:13 compute-2 ceph-mon[77081]: pgmap v804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:13.174+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:13 compute-2 python3.9[225358]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 13:55:13 compute-2 systemd[1]: Reloading.
Jan 22 13:55:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:13.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:13 compute-2 systemd-rc-local-generator[225387]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 13:55:13 compute-2 systemd-sysv-generator[225392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 13:55:13 compute-2 systemd[1]: Starting nova_compute container...
Jan 22 13:55:13 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:55:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:13 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:14 compute-2 podman[225398]: 2026-01-22 13:55:14.012823101 +0000 UTC m=+0.160297581 container init 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Jan 22 13:55:14 compute-2 podman[225398]: 2026-01-22 13:55:14.025589381 +0000 UTC m=+0.173063881 container start 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible)
Jan 22 13:55:14 compute-2 podman[225398]: nova_compute
Jan 22 13:55:14 compute-2 nova_compute[225413]: + sudo -E kolla_set_configs
Jan 22 13:55:14 compute-2 systemd[1]: Started nova_compute container.
Jan 22 13:55:14 compute-2 sudo[225356]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Validating config file
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying service configuration files
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Deleting /etc/ceph
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Creating directory /etc/ceph
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Writing out command to execute
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:14 compute-2 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:14 compute-2 nova_compute[225413]: ++ cat /run_command
Jan 22 13:55:14 compute-2 nova_compute[225413]: + CMD=nova-compute
Jan 22 13:55:14 compute-2 nova_compute[225413]: + ARGS=
Jan 22 13:55:14 compute-2 nova_compute[225413]: + sudo kolla_copy_cacerts
Jan 22 13:55:14 compute-2 nova_compute[225413]: + [[ ! -n '' ]]
Jan 22 13:55:14 compute-2 nova_compute[225413]: + . kolla_extend_start
Jan 22 13:55:14 compute-2 nova_compute[225413]: Running command: 'nova-compute'
Jan 22 13:55:14 compute-2 nova_compute[225413]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 13:55:14 compute-2 nova_compute[225413]: + umask 0022
Jan 22 13:55:14 compute-2 nova_compute[225413]: + exec nova-compute
Jan 22 13:55:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:14 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:14.192+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:15.143+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:15 compute-2 ceph-mon[77081]: pgmap v805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:15 compute-2 python3.9[225576]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:15.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:16.094+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:16 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.258 225417 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.258 225417 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.259 225417 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.259 225417 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 22 13:55:16 compute-2 python3.9[225728]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.408 225417 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.439 225417 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.440 225417 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 22 13:55:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:16 compute-2 nova_compute[225413]: 2026-01-22 13:55:16.985 225417 INFO nova.virt.driver [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 22 13:55:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:17.087+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.113 225417 INFO nova.compute.provider_config [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 22 13:55:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.134 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.135 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.135 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.135 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.179 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.179 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.179 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 WARNING oslo_config.cfg [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 13:55:17 compute-2 nova_compute[225413]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 13:55:17 compute-2 nova_compute[225413]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 13:55:17 compute-2 nova_compute[225413]: and ``live_migration_inbound_addr`` respectively.
Jan 22 13:55:17 compute-2 nova_compute[225413]: ).  Its value may be silently ignored in the future.
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 ceph-mon[77081]: pgmap v806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:17 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 python3.9[225881]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.296 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.296 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.296 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.316 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.316 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.317 225417 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.330 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.330 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.331 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.331 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 22 13:55:17 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 13:55:17 compute-2 systemd[1]: Started libvirt QEMU daemon.
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.405 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb57bf492b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.407 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb57bf492b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.408 225417 INFO nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Connection event '1' reason 'None'
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.435 225417 WARNING nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Jan 22 13:55:17 compute-2 nova_compute[225413]: 2026-01-22 13:55:17.436 225417 DEBUG nova.virt.libvirt.volume.mount [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 22 13:55:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:17.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:18.094+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:18 compute-2 sudo[226091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxlxraahhughykwzrboztxreyunvrqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090117.746789-3574-76858875700521/AnsiballZ_podman_container.py'
Jan 22 13:55:18 compute-2 sudo[226091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.319 225417 INFO nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]: 
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <host>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <uuid>5492a354-d192-4c48-8602-99be1884b049</uuid>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <arch>x86_64</arch>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <microcode version='16777317'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <signature family='23' model='49' stepping='0'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='x2apic'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='tsc-deadline'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='osxsave'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='hypervisor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='tsc_adjust'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='spec-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='stibp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='arch-capabilities'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='cmp_legacy'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='topoext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='virt-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='lbrv'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='tsc-scale'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='vmcb-clean'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='pause-filter'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='pfthreshold'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='svme-addr-chk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='rdctl-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='skip-l1dfl-vmentry'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='mds-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature name='pschange-mc-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <pages unit='KiB' size='4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <pages unit='KiB' size='2048'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <pages unit='KiB' size='1048576'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <power_management>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <suspend_mem/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </power_management>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <iommu support='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <migration_features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <live/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <uri_transports>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <uri_transport>tcp</uri_transport>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <uri_transport>rdma</uri_transport>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </uri_transports>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </migration_features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <topology>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <cells num='1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <cell id='0'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           <memory unit='KiB'>7864312</memory>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           <pages unit='KiB' size='2048'>0</pages>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           <distances>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <sibling id='0' value='10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           </distances>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           <cpus num='8'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:           </cpus>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         </cell>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </cells>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </topology>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <cache>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </cache>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <secmodel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model>selinux</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <doi>0</doi>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </secmodel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <secmodel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model>dac</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <doi>0</doi>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </secmodel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </host>
Jan 22 13:55:18 compute-2 nova_compute[225413]: 
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <guest>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <os_type>hvm</os_type>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <arch name='i686'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <wordsize>32</wordsize>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <domain type='qemu'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <domain type='kvm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </arch>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <pae/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <nonpae/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <apic default='on' toggle='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <cpuselection/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <deviceboot/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <externalSnapshot/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </guest>
Jan 22 13:55:18 compute-2 nova_compute[225413]: 
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <guest>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <os_type>hvm</os_type>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <arch name='x86_64'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <wordsize>64</wordsize>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <domain type='qemu'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <domain type='kvm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </arch>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <apic default='on' toggle='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <cpuselection/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <deviceboot/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <externalSnapshot/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </guest>
Jan 22 13:55:18 compute-2 nova_compute[225413]: 
Jan 22 13:55:18 compute-2 nova_compute[225413]: </capabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]: 
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.326 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.341 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 13:55:18 compute-2 nova_compute[225413]: <domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <arch>i686</arch>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <vcpu max='240'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <os supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='firmware'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <loader supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>rom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pflash</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='readonly'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>yes</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='secure'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </loader>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </os>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>anonymous</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>memfd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </memoryBacking>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <disk supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>disk</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cdrom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>floppy</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>lun</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ide</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>fdc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>sata</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </disk>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vnc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </graphics>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <video supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='modelType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vga</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cirrus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>none</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>bochs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ramfb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </video>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='mode'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>subsystem</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>mandatory</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>requisite</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>optional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pci</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hostdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <rng supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>random</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </rng>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='driverType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>path</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>handle</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </filesystem>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emulator</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>external</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>2.0</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </tpm>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </redirdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <channel supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </channel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </crypto>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <interface supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>passt</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </interface>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <panic supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>isa</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>hyperv</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </panic>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <console supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>null</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dev</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pipe</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stdio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>udp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tcp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </console>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <gic supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sev supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='features'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>relaxed</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vapic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vpindex</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>runtime</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>synic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stimer</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reset</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>frequencies</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ipi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>avic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hyperv>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </features>
Jan 22 13:55:18 compute-2 nova_compute[225413]: </domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.350 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 13:55:18 compute-2 nova_compute[225413]: <domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <arch>i686</arch>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <vcpu max='4096'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <os supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='firmware'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <loader supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>rom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pflash</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='readonly'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>yes</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='secure'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </loader>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </os>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>anonymous</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>memfd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </memoryBacking>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <disk supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>disk</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cdrom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>floppy</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>lun</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>fdc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>sata</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </disk>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vnc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </graphics>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <video supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='modelType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vga</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cirrus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>none</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>bochs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ramfb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </video>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='mode'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>subsystem</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>mandatory</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>requisite</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>optional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pci</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hostdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <rng supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>random</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </rng>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='driverType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>path</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>handle</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </filesystem>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emulator</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>external</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>2.0</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </tpm>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </redirdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <channel supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </channel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </crypto>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <interface supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>passt</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </interface>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <panic supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>isa</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>hyperv</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </panic>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <console supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>null</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dev</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pipe</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stdio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>udp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tcp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </console>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <gic supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sev supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='features'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>relaxed</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vapic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vpindex</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>runtime</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>synic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stimer</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reset</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>frequencies</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ipi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>avic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hyperv>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </features>
Jan 22 13:55:18 compute-2 nova_compute[225413]: </domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
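[annotation] The dump ending above is the libvirt domainCapabilities document that nova-compute fetches per (arch, machine type) pair; each model with usable='no' is followed by a <blockers> element naming the host-CPU features that model requires but this host cannot provide. A minimal sketch of retrieving the same document via libvirt-python follows — the connection URI and argument values are illustrative (taken from the dump for machine type 'pc' below), not nova's exact call:

    # Minimal sketch (assumption: libvirt-python installed; URI and arguments
    # illustrative). virConnect.getDomainCapabilities() returns the same
    # domainCapabilities XML string that nova logs above.
    import libvirt

    conn = libvirt.open('qemu:///system')      # local QEMU/KVM driver
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',               # <path> from the dump
        'x86_64',                              # <arch>
        'pc',                                  # nova iterates over {'pc', 'q35'}
        'kvm',                                 # <domain> virt type
    )
    print(xml)
    conn.close()

The equivalent one-off check from a shell is `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm`.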
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.405 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
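[annotation] The <model usable=...> / <blockers model=...> pairs in these dumps are what CPU-model selection consumes: a named model is only viable for cpu_mode=custom when usable='yes', and the blockers list the exact missing host features (e.g. Skylake-Client above is blocked on erms, hle, invpcid, pcid, rtm). A hypothetical helper (not nova's code) that extracts that mapping with ElementTree:

    # Hypothetical helper: map each unusable custom-mode CPU model to the
    # host features blocking it, from a domainCapabilities XML string like
    # the ones logged here.
    import xml.etree.ElementTree as ET

    def unusable_models(domcaps_xml: str) -> dict[str, list[str]]:
        root = ET.fromstring(domcaps_xml)
        custom = root.find(".//cpu/mode[@name='custom']")
        blocked = {}
        for blockers in custom.findall('blockers'):
            model = blockers.get('model')
            blocked[model] = [f.get('name') for f in blockers.findall('feature')]
        return blocked

    # e.g. unusable_models(xml)['Skylake-Client']
    #   -> ['erms', 'hle', 'invpcid', 'pcid', 'rtm']   (per the dump above)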
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.410 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 13:55:18 compute-2 nova_compute[225413]: <domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <arch>x86_64</arch>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <vcpu max='240'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <os supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='firmware'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <loader supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>rom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pflash</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='readonly'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>yes</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='secure'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </loader>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </os>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
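The <model>/<blockers> pairs logged above follow libvirt's domain-capabilities schema: every named CPU model carries usable='yes' or usable='no', and each unusable model is paired with a <blockers> element listing the features the host CPU lacks. A minimal Python sketch of how such a fragment can be summarized follows; the wrapping <models> root and the variable names are illustrative assumptions, since the log shows only the inner elements.

    import xml.etree.ElementTree as ET

    # Assumed input: <model>/<blockers> lines extracted from the log,
    # wrapped in a single root element so the fragment parses as one tree.
    caps_xml = """<models>
      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
      <blockers model='Dhyana-v2'>
        <feature name='xsaves'/>
      </blockers>
      <model usable='yes' vendor='AMD'>EPYC-v1</model>
    </models>"""

    root = ET.fromstring(caps_xml)
    # Map each blocked model to the feature names the host is missing.
    blockers = {
        b.get("model"): [f.get("name") for f in b.findall("feature")]
        for b in root.findall("blockers")
    }
    for model in root.findall("model"):
        if model.get("usable") == "yes":
            print(f"{model.text}: usable")
        else:
            print(f"{model.text}: blocked by {', '.join(blockers.get(model.text, []))}")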
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
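Note the pattern within the EPYC-Rome family above: v1 through v3 are blocked solely by xsaves, while v4 and v5 are reported usable on this host. If guests should be pinned to a concrete model the host can actually provide, a nova.conf fragment along the following lines would express that; picking EPYC-Rome-v4 here is an illustrative assumption based on the usable='yes' entries, not a recommendation taken from the log itself.

    [libvirt]
    cpu_mode = custom
    cpu_models = EPYC-Rome-v4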
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
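As with Dhyana-v2 and EPYC-Rome v1 through v3 above, xsaves is the lone blocker for EPYC-v3 through EPYC-v5, so each of those models fails on a single missing host feature. A rough host-side cross-check is to look for the flag in /proc/cpuinfo, as in the sketch below; QEMU/libvirt feature names and kernel flag names do not always match one-to-one, so treat this as a heuristic rather than an authoritative test.

    # Heuristic check: does the host kernel advertise a given CPU flag?
    # Feature names reported by libvirt (e.g. 'xsaves') usually, but not
    # always, match the flag names listed in /proc/cpuinfo.
    def host_has_flag(flag: str) -> bool:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return flag in line.split(":", 1)[1].split()
        return False

    print("xsaves on host:", host_has_flag("xsaves"))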
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 python3.9[226093]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>anonymous</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>memfd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </memoryBacking>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <disk supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>disk</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cdrom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>floppy</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>lun</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ide</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>fdc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>sata</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </disk>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vnc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </graphics>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <video supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='modelType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vga</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cirrus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>none</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>bochs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ramfb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </video>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='mode'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>subsystem</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>mandatory</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>requisite</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>optional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pci</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hostdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <rng supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>random</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </rng>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='driverType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>path</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>handle</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </filesystem>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emulator</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>external</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>2.0</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </tpm>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </redirdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <channel supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </channel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </crypto>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <interface supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>passt</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </interface>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <panic supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>isa</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>hyperv</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </panic>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <console supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>null</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dev</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pipe</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stdio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>udp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tcp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </console>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <gic supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sev supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='features'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>relaxed</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vapic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vpindex</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>runtime</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>synic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stimer</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reset</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>frequencies</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ipi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>avic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hyperv>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </features>
Jan 22 13:55:18 compute-2 nova_compute[225413]: </domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.489 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 13:55:18 compute-2 nova_compute[225413]: <domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <domain>kvm</domain>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <arch>x86_64</arch>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <vcpu max='4096'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <iothreads supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <os supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='firmware'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>efi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <loader supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>rom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pflash</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='readonly'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>yes</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='secure'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>yes</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>no</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </loader>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </os>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='maximumMigratable'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>on</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>off</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <vendor>AMD</vendor>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='succor'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <mode name='custom' supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ddpd-u'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sha512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm3'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sm4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Denverton-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amd-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='auto-ibrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='perfmon-v2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbpb'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='stibp-always-on'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='EPYC-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-128'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-256'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx10-512'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='prefetchiti'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Haswell-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512er'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512pf'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fma4'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tbm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xop'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='amx-tile'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-bf16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-fp16'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bitalg'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrc'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fzrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='la57'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='taa-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ifma'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cmpccxadd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fbsdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='fsrs'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ibrs-all'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='intel-psfd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='lam'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mcdt-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pbrsb-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='psdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='serialize'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vaes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='hle'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 sudo[226091]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='rtm'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512bw'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512cd'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512dq'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512f'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='avx512vl'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='invpcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pcid'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='pku'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='mpx'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='core-capability'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='split-lock-detect'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='cldemote'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='erms'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='gfni'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdir64b'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='movdiri'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='xsaves'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='athlon-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='core2duo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='coreduo-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='n270-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='ss'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <blockers model='phenom-v1'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnow'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <feature name='3dnowext'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </blockers>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </mode>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <memoryBacking supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <enum name='sourceType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>anonymous</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <value>memfd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </memoryBacking>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <disk supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='diskDevice'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>disk</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cdrom</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>floppy</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>lun</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>fdc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>sata</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </disk>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <graphics supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vnc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egl-headless</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </graphics>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <video supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='modelType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vga</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>cirrus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>none</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>bochs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ramfb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </video>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hostdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='mode'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>subsystem</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='startupPolicy'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>mandatory</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>requisite</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>optional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='subsysType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pci</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>scsi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='capsType'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='pciBackend'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hostdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <rng supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtio-non-transitional</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>random</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>egd</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </rng>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <filesystem supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='driverType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>path</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>handle</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>virtiofs</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </filesystem>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tpm supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-tis</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tpm-crb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emulator</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>external</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendVersion'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>2.0</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </tpm>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <redirdev supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='bus'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>usb</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </redirdev>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <channel supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </channel>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <crypto supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendModel'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>builtin</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </crypto>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <interface supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='backendType'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>default</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>passt</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </interface>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <panic supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='model'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>isa</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>hyperv</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </panic>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <console supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='type'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>null</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vc</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pty</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dev</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>file</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>pipe</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stdio</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>udp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tcp</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>unix</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>qemu-vdagent</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>dbus</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </console>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </devices>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <features>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <gic supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <genid supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <backup supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <async-teardown supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <s390-pv supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <ps2 supported='yes'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <tdx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sev supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <sgx supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <hyperv supported='yes'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <enum name='features'>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>relaxed</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vapic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>spinlocks</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vpindex</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>runtime</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>synic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>stimer</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reset</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>vendor_id</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>frequencies</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>reenlightenment</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>tlbflush</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>ipi</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>avic</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>emsr_bitmap</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <value>xmm_input</value>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </enum>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       <defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:18 compute-2 nova_compute[225413]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:18 compute-2 nova_compute[225413]:       </defaults>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     </hyperv>
Jan 22 13:55:18 compute-2 nova_compute[225413]:     <launchSecurity supported='no'/>
Jan 22 13:55:18 compute-2 nova_compute[225413]:   </features>
Jan 22 13:55:18 compute-2 nova_compute[225413]: </domainCapabilities>
Jan 22 13:55:18 compute-2 nova_compute[225413]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.570 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.578 225417 INFO nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Secure Boot support detected
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.580 225417 INFO nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.590 225417 DEBUG nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 13:55:18 compute-2 nova_compute[225413]:   <model>Nehalem</model>
Jan 22 13:55:18 compute-2 nova_compute[225413]: </cpu>
Jan 22 13:55:18 compute-2 nova_compute[225413]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.592 225417 DEBUG nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.663 225417 INFO nova.virt.node [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Determined node identity d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from /var/lib/nova/compute_id
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.684 225417 WARNING nova.compute.manager [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Compute nodes ['d4dcb68c-0009-4467-a6f7-0e9fe0236fbc'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.729 225417 INFO nova.compute.manager [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 22 13:55:18 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.773 225417 WARNING nova.compute.manager [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:55:18 compute-2 nova_compute[225413]: 2026-01-22 13:55:18.775 225417 DEBUG oslo_concurrency.processutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:19.133+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:55:19 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/729966866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:19 compute-2 nova_compute[225413]: 2026-01-22 13:55:19.184 225417 DEBUG oslo_concurrency.processutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:19 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 13:55:19 compute-2 systemd[1]: Started libvirt nodedev daemon.
Jan 22 13:55:19 compute-2 sudo[226315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjczllywnazdulbbcbpygsoqgeygoixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090119.092948-3599-2163913330788/AnsiballZ_systemd.py'
Jan 22 13:55:19 compute-2 sudo[226315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:19.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:19 compute-2 nova_compute[225413]: 2026-01-22 13:55:19.697 225417 WARNING nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:55:19 compute-2 nova_compute[225413]: 2026-01-22 13:55:19.698 225417 DEBUG nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5277MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:55:19 compute-2 nova_compute[225413]: 2026-01-22 13:55:19.698 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:19 compute-2 nova_compute[225413]: 2026-01-22 13:55:19.698 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:19 compute-2 python3.9[226318]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 13:55:19 compute-2 systemd[1]: Stopping nova_compute container...
Jan 22 13:55:19 compute-2 nova_compute[225413]: 2026-01-22 13:55:19.875 225417 WARNING nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] No compute node record for compute-2.ctlplane.example.com:d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host d4dcb68c-0009-4467-a6f7-0e9fe0236fbc could not be found.
Jan 22 13:55:20 compute-2 nova_compute[225413]: 2026-01-22 13:55:20.083 225417 INFO nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc
Jan 22 13:55:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:20.105+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:20 compute-2 sudo[226336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:20 compute-2 sudo[226336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:20 compute-2 sudo[226336]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:20 compute-2 sudo[226361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:20 compute-2 sudo[226361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:20 compute-2 sudo[226361]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:20 compute-2 nova_compute[225413]: 2026-01-22 13:55:20.472 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:20 compute-2 nova_compute[225413]: 2026-01-22 13:55:20.472 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:55:20 compute-2 nova_compute[225413]: 2026-01-22 13:55:20.473 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:55:20 compute-2 nova_compute[225413]: 2026-01-22 13:55:20.473 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 13:55:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:20.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:20 compute-2 virtqemud[225907]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 13:55:20 compute-2 virtqemud[225907]: hostname: compute-2
Jan 22 13:55:20 compute-2 virtqemud[225907]: End of file while reading data: Input/output error
Jan 22 13:55:20 compute-2 systemd[1]: libpod-572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649.scope: Deactivated successfully.
Jan 22 13:55:20 compute-2 systemd[1]: libpod-572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649.scope: Consumed 3.619s CPU time.
Jan 22 13:55:20 compute-2 podman[226323]: 2026-01-22 13:55:20.869987571 +0000 UTC m=+1.000893923 container died 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Jan 22 13:55:20 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649-userdata-shm.mount: Deactivated successfully.
Jan 22 13:55:20 compute-2 systemd[1]: var-lib-containers-storage-overlay-c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b-merged.mount: Deactivated successfully.
Jan 22 13:55:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:21.086+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:21 compute-2 ceph-mon[77081]: pgmap v807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:21 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:21 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/729966866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:21 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/532836915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
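osd.2 has been re-reporting the same four blocked ops for over eighteen minutes (1108 s) while the mon republishes the SLOW_OPS health check. A minimal sketch, assuming the ceph CLI and a readable keyring on the host, of pulling that health detail as JSON; the checks/summary field names match what recent Ceph releases emit, but treat them as an assumption:

    import json
    import subprocess

    # Ask the monitors for structured health detail.
    raw = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for name, check in json.loads(raw).get("checks", {}).items():
        # e.g. SLOW_OPS -> "4 slow ops, oldest one blocked for 1108 sec, ..."
        print(name, "->", check["summary"]["message"])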
Jan 22 13:55:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
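These beast access lines recur roughly every second from 192.168.122.100-102 and are anonymous HEAD / probes, presumably load-balancer health checks, each answered with 200. A quick sketch of pulling client, status and latency out of one line with a regex fitted to this exact format (other beast log variants would need a different pattern):

    import re

    LINE = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:13:55:20.805 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    PATTERN = re.compile(
        r'beast: \S+: (?P<client>\S+) .*?'
        r'"(?P<request>[^"]+)" (?P<status>\d{3}) \d+ '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    m = PATTERN.search(LINE)
    if m:
        print(m.group("client"), m.group("status"), m.group("latency"))
        # -> 192.168.122.100 200 0.000000000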
Jan 22 13:55:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:22.040+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:22 compute-2 podman[226323]: 2026-01-22 13:55:22.204271854 +0000 UTC m=+2.335178206 container cleanup 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 22 13:55:22 compute-2 podman[226323]: nova_compute
Jan 22 13:55:22 compute-2 podman[226404]: nova_compute
Jan 22 13:55:22 compute-2 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 22 13:55:22 compute-2 systemd[1]: Stopped nova_compute container.
Jan 22 13:55:22 compute-2 systemd[1]: Starting nova_compute container...
Jan 22 13:55:22 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:55:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:22 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
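The five xfs warnings fire once per bind mount being remounted into the restarting container and mean only that the filesystem's inodes carry 32-bit timestamps, valid through second 0x7fffffff of the Unix epoch. Checking what moment that actually is:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds after 1970-01-01T00:00:00Z: the y2038 limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00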
Jan 22 13:55:22 compute-2 podman[226417]: 2026-01-22 13:55:22.379101702 +0000 UTC m=+0.084987919 container init 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 13:55:22 compute-2 podman[226417]: 2026-01-22 13:55:22.386094603 +0000 UTC m=+0.091980820 container start 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 22 13:55:22 compute-2 podman[226417]: nova_compute
Jan 22 13:55:22 compute-2 nova_compute[226433]: + sudo -E kolla_set_configs
Jan 22 13:55:22 compute-2 systemd[1]: Started nova_compute container.
Jan 22 13:55:22 compute-2 sudo[226315]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Validating config file
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying service configuration files
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /etc/ceph
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Creating directory /etc/ceph
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Writing out command to execute
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:22 compute-2 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 13:55:22 compute-2 nova_compute[226433]: ++ cat /run_command
Jan 22 13:55:22 compute-2 nova_compute[226433]: + CMD=nova-compute
Jan 22 13:55:22 compute-2 nova_compute[226433]: + ARGS=
Jan 22 13:55:22 compute-2 nova_compute[226433]: + sudo kolla_copy_cacerts
Jan 22 13:55:22 compute-2 nova_compute[226433]: + [[ ! -n '' ]]
Jan 22 13:55:22 compute-2 nova_compute[226433]: + . kolla_extend_start
Jan 22 13:55:22 compute-2 nova_compute[226433]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 13:55:22 compute-2 nova_compute[226433]: Running command: 'nova-compute'
Jan 22 13:55:22 compute-2 nova_compute[226433]: + umask 0022
Jan 22 13:55:22 compute-2 nova_compute[226433]: + exec nova-compute
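Everything from "Loading config file" down to "exec nova-compute" is kolla_start driven by the COPY_ALWAYS strategy: on every container start it re-copies each file listed in /var/lib/kolla/config_files/config.json over the previous copy, fixes ownership and mode, writes the service command to /run_command, and finally execs it. A minimal sketch of that copy loop for plain files, assuming the documented config.json layout of {"command": ..., "config_files": [{"source", "dest", "owner", "perm"}, ...]}; this is an illustration of the strategy, not the kolla source:

    import json
    import shutil
    from pathlib import Path

    cfg = json.loads(Path("/var/lib/kolla/config_files/config.json").read_text())

    for entry in cfg.get("config_files", []):
        src, dest = Path(entry["source"]), Path(entry["dest"])
        if dest.exists():                  # COPY_ALWAYS: delete, then re-copy
            dest.unlink()                  # (directories need a recursive variant)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
        dest.chmod(int(entry.get("perm", "0600"), 8))

    # kolla_start later does: exec $(cat /run_command)
    Path("/run_command").write_text(cfg["command"])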
Jan 22 13:55:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:23.008+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:23 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3079402314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:23 compute-2 ceph-mon[77081]: pgmap v808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:24 compute-2 podman[226497]: 2026-01-22 13:55:24.006386849 +0000 UTC m=+0.061515666 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 13:55:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:24.029+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-2 ceph-mon[77081]: pgmap v809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:24 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:24 compute-2 sudo[226616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgycnmuomxcpajglzraafaikukswivym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769090123.8754086-3625-117867827046181/AnsiballZ_podman_container.py'
Jan 22 13:55:24 compute-2 sudo[226616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 13:55:24 compute-2 nova_compute[226433]: 2026-01-22 13:55:24.359 226437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:24 compute-2 nova_compute[226433]: 2026-01-22 13:55:24.359 226437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:24 compute-2 nova_compute[226433]: 2026-01-22 13:55:24.360 226437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 22 13:55:24 compute-2 nova_compute[226433]: 2026-01-22 13:55:24.360 226437 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
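The three "Loaded VIF plugin class" lines are os_vif initialising its VIF drivers, which are discovered as Python entry points. A minimal sketch of the same discovery with stevedore, assuming the three plugin packages are installed and register themselves under the 'os_vif' entry-point namespace:

    from stevedore import extension

    # Enumerate everything registered under the 'os_vif' namespace.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in mgr:
        print(f"Loaded VIF plugin class {ext.plugin!r} with name {ext.name!r}")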
Jan 22 13:55:24 compute-2 python3.9[226618]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 13:55:24 compute-2 nova_compute[226433]: 2026-01-22 13:55:24.494 226437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:24 compute-2 nova_compute[226433]: 2026-01-22 13:55:24.506 226437 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:24 compute-2 nova_compute[226433]: 2026-01-22 13:55:24.506 226437 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
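The grep that "failed. Not Retrying." is a deliberate capability probe: nova's storage code checks whether the visible iscsiadm supports manual node.session.scan by grepping the binary for the string, and exit code 1 simply means "absent" (here the binary is the run-on-host shim installed above, so the string is not found). A sketch of the same probe with oslo.concurrency, where check_exit_code treats both outcomes as success:

    from oslo_concurrency import processutils

    # grep -F exits 0 if the literal string occurs in the file, 1 if not.
    out, _err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=[0, 1],
    )
    print("manual iSCSI scan supported:", bool(out))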
Jan 22 13:55:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:55:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:55:24 compute-2 systemd[1]: Started libpod-conmon-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4.scope.
Jan 22 13:55:24 compute-2 systemd[1]: Started libcrun container.
Jan 22 13:55:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 22 13:55:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:25.021+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:25 compute-2 podman[226649]: 2026-01-22 13:55:25.025103 +0000 UTC m=+0.469588513 container init 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:55:25 compute-2 podman[226649]: 2026-01-22 13:55:25.034272671 +0000 UTC m=+0.478758184 container start 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3)
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.040 226437 INFO nova.virt.driver [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 22 13:55:25 compute-2 python3.9[226618]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Applying nova statedir ownership
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 22 13:55:25 compute-2 nova_compute_init[226671]: INFO:nova_statedir:Nova statedir ownership complete
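nova_compute_init is the one-shot container started above: it runs /sbin/nova_statedir_ownership.py, which walks /var/lib/nova, re-chowns anything not already owned by the in-container nova uid/gid (42436), resets the SELinux context, and skips the paths named in NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that walk with the numbers taken from the log; the selinux bindings are assumed installed, and the real script has more error handling:

    import os
    import selinux  # libselinux Python bindings, assumed available

    TARGET_UID = TARGET_GID = 42436
    SKIP = {"/var/lib/nova/compute_id"}   # from NOVA_STATEDIR_OWNERSHIP_SKIP
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def fix(path):
        if path in SKIP:
            return
        st = os.lstat(path)
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            os.lchown(path, TARGET_UID, TARGET_GID)
        selinux.lsetfilecon(path, CONTEXT)

    fix("/var/lib/nova")
    for root, dirs, files in os.walk("/var/lib/nova"):
        for name in dirs + files:
            fix(os.path.join(root, name))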
Jan 22 13:55:25 compute-2 systemd[1]: libpod-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4.scope: Deactivated successfully.
Jan 22 13:55:25 compute-2 podman[226683]: 2026-01-22 13:55:25.131340849 +0000 UTC m=+0.029999622 container died 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init)
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.153 226437 INFO nova.compute.provider_config [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 22 13:55:25 compute-2 sudo[226616]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.165 226437 DEBUG oslo_concurrency.lockutils [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.165 226437 DEBUG oslo_concurrency.lockutils [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.165 226437 DEBUG oslo_concurrency.lockutils [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
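The Acquiring/Acquired/Releasing triple around "singleton_lock" is oslo.concurrency's lockutils logging at debug level; oslo.service takes this lock while setting up its signal-handler singleton during startup. The same three lines can be produced with:

    from oslo_concurrency import lockutils

    # Logs Acquiring/Acquired/Releasing for the named lock at DEBUG level.
    with lockutils.lock("singleton_lock"):
        pass  # critical section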
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
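The banner just opened, and the long alphabetised listing that follows it, are a single oslo.config call: with debug enabled, nova asks the config object to dump every resolved option to the logger at startup. A minimal sketch of producing the same block, with a hypothetical option registered so there is something to print:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([cfg.BoolOpt("debug", default=True)])  # hypothetical option
    CONF([], project="demo")

    # Emits the ***** banner, the "Configuration options gathered from:" header,
    # and one line per option, like the listing that follows here.
    CONF.log_opt_values(LOG, logging.DEBUG)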
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 ceph-mon[77081]: pgmap v810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.192 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4-userdata-shm.mount: Deactivated successfully.
Jan 22 13:55:25 compute-2 systemd[1]: var-lib-containers-storage-overlay-89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce-merged.mount: Deactivated successfully.
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.193 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.193 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 WARNING oslo_config.cfg [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 13:55:25 compute-2 nova_compute[226433]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 13:55:25 compute-2 nova_compute[226433]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Jan 22 13:55:25 compute-2 nova_compute[226433]: and ``live_migration_inbound_addr``, respectively.
Jan 22 13:55:25 compute-2 nova_compute[226433]: ).  Its value may be silently ignored in the future.
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
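[Editor's note] Nova derives its migration URI as qemu+<scheme>://<host>/system, so the deprecated value logged above (qemu+tls://%s/system) maps onto the two replacement options named in the warning. A minimal nova.conf sketch of that migration, assuming TLS is the only part of the URI being customized; the inbound address shown is a placeholder, not a value taken from this log:

    [libvirt]
    # Replaces the deprecated: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # Hypothetical placeholder; set to this host's migration-network address
    #live_migration_inbound_addr = <migration-network-ip>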
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
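[Editor's note] The [libvirt] group of the dump ends here. A reconstruction of the section as it might appear in nova.conf, using only values logged above; note the dump shows effective values, so it cannot distinguish options set in the file from ones left at their defaults:

    [libvirt]
    virt_type = kvm
    # Ceph RBD-backed ephemeral storage
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_glance_store_name = default_backend
    rbd_user = openstack
    rbd_secret_uuid = 088fe176-0106-5401-803c-2da38b73b76a
    # Live migration behavior
    live_migration_permit_auto_converge = true
    live_migration_permit_post_copy = true
    live_migration_with_native_tls = true
    # Storage and device settings
    volume_use_multipath = true
    swtpm_enabled = true
    sysinfo_serial = unique
    rx_queue_size = 512
    tx_queue_size = 512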
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.250 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.250 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.250 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
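[Editor's note] The [neutron] group ends here. A sketch of the corresponding nova.conf section, assuming the dumped values originate from the file; oslo.config masks the shared secret as ****, so the real value cannot be recovered from this log:

    [neutron]
    auth_type = password
    region_name = regionOne
    ovs_bridge = br-int
    valid_interfaces = internal
    # Compute proxies metadata requests on behalf of Neutron
    service_metadata_proxy = true
    # Redacted by oslo.config in the dump above
    metadata_proxy_shared_secret = ****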
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 podman[226683]: 2026-01-22 13:55:25.259919081 +0000 UTC m=+0.158577834 container cleanup 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
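[Editor's note] The [placement] group ends here. A reconstruction of the Keystone-auth settings it implies, again with the password redacted exactly as oslo.config logged it:

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    # Redacted by oslo.config in the dump above
    password = ****
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal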
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
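[Editor's note] The [quota] values dumped above (cores = 20, instances = 10, ram = 51200, key_pairs = 100, metadata_items = 128, server_groups = 10, server_group_members = 10) appear to match Nova's upstream defaults, so this deployment likely carries no [quota] overrides in its file. If overrides were wanted, they would take this form (hypothetical values shown, identical to the logged defaults):

    [quota]
    # Hypothetical overrides; the values logged above are upstream defaults
    instances = 10
    cores = 20
    ram = 51200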
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 systemd[1]: libpod-conmon-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4.scope: Deactivated successfully.
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
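[Editor's note] The [scheduler] and [filter_scheduler] groups end here. These options are consumed by nova-scheduler rather than by this compute service, but the dump still shows them because the whole config tree is loaded. A sketch of the filter configuration implied above, assuming the usual comma-separated list syntax for oslo.config ListOpt values:

    [filter_scheduler]
    available_filters = nova.scheduler.filters.all_filters
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter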
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.283 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
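Note on the block above: the per-option `group.option = value` lines and the closing row of asterisks are produced by oslo.config itself, not by nova-specific code. On startup the service calls the library's `log_opt_values()` helper, which writes one DEBUG line per registered option (values marked secret are masked) and frames the dump with `*` separator rows. A minimal, self-contained sketch of that mechanism, using a made-up `demo_group` option rather than nova's real configuration:

```python
# Sketch of the oslo.config option dump seen above. The option group and
# name here are invented for illustration; only log_opt_values() and the
# registration calls are real oslo.config API.
import logging

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts(
    [cfg.IntOpt('thread_pool_size', default=8)],
    group='demo_group',
)

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF([])  # parse an empty argv so only defaults apply
# Emits one DEBUG line per registered option, ending with a '*' row:
CONF.log_opt_values(LOG, logging.DEBUG)
```

Run as-is this prints the same `demo_group.thread_pool_size = 8` style lines, sourced from defaults since no configuration file is passed.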
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.308 226437 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.378 226437 INFO nova.virt.node [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Determined node identity d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from /var/lib/nova/compute_id
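The node-identity line above reflects nova reading a persisted UUID from `/var/lib/nova/compute_id`, so the compute node keeps a stable identity (and resource-provider mapping) across restarts and hostname changes. A minimal sketch of that read-or-create pattern; this is not nova's actual implementation, which handles more error and consistency cases:

```python
# Hypothetical helper mirroring the compute_id lookup logged above.
import os
import uuid

COMPUTE_ID_FILE = '/var/lib/nova/compute_id'  # path taken from the log


def get_local_node_uuid(path=COMPUTE_ID_FILE):
    """Return the persisted node UUID, creating one on first start."""
    if os.path.exists(path):
        with open(path) as f:
            return uuid.UUID(f.read().strip())
    node_uuid = uuid.uuid4()
    with open(path, 'w') as f:
        f.write(str(node_uuid))
    return node_uuid
```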
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.379 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.380 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.380 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.380 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.391 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fdd7ca57070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.393 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fdd7ca57070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.394 226437 INFO nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Connection event '1' reason 'None'
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.401 226437 INFO nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]: 
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <host>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <uuid>5492a354-d192-4c48-8602-99be1884b049</uuid>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <arch>x86_64</arch>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model>EPYC-Rome-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <vendor>AMD</vendor>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <microcode version='16777317'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <signature family='23' model='49' stepping='0'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='x2apic'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='tsc-deadline'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='osxsave'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='hypervisor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='tsc_adjust'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='spec-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='stibp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='arch-capabilities'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='cmp_legacy'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='topoext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='virt-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='lbrv'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='tsc-scale'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='vmcb-clean'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='pause-filter'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='pfthreshold'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='svme-addr-chk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='rdctl-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='skip-l1dfl-vmentry'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='mds-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature name='pschange-mc-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <pages unit='KiB' size='4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <pages unit='KiB' size='2048'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <pages unit='KiB' size='1048576'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <power_management>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <suspend_mem/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </power_management>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <iommu support='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <migration_features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <live/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <uri_transports>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <uri_transport>tcp</uri_transport>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <uri_transport>rdma</uri_transport>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </uri_transports>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </migration_features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <topology>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <cells num='1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <cell id='0'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           <memory unit='KiB'>7864312</memory>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           <pages unit='KiB' size='2048'>0</pages>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           <distances>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <sibling id='0' value='10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           </distances>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           <cpus num='8'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:           </cpus>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         </cell>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </cells>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </topology>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <cache>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </cache>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <secmodel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model>selinux</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <doi>0</doi>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </secmodel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <secmodel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model>dac</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <doi>0</doi>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </secmodel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </host>
Jan 22 13:55:25 compute-2 nova_compute[226433]: 
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <guest>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <os_type>hvm</os_type>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <arch name='i686'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <wordsize>32</wordsize>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <domain type='qemu'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <domain type='kvm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </arch>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <pae/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <nonpae/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <apic default='on' toggle='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <cpuselection/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <deviceboot/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <externalSnapshot/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </guest>
Jan 22 13:55:25 compute-2 nova_compute[226433]: 
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <guest>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <os_type>hvm</os_type>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <arch name='x86_64'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <wordsize>64</wordsize>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <domain type='qemu'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <domain type='kvm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </arch>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <acpi default='on' toggle='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <apic default='on' toggle='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <cpuselection/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <deviceboot/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <disksnapshot default='on' toggle='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <externalSnapshot/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </guest>
Jan 22 13:55:25 compute-2 nova_compute[226433]: 
Jan 22 13:55:25 compute-2 nova_compute[226433]: </capabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]: 
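The `<capabilities>` document above is libvirt's host capabilities XML, which nova fetches over the `qemu:///system` connection it opened a few lines earlier. A minimal sketch, assuming the `libvirt-python` bindings and a reachable local libvirtd, that reproduces the query and pulls out the host UUID and the NUMA cell layout reported under `<topology>`:

```python
# Fetch and inspect the same host capabilities XML nova logs above.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')  # same URI nova connects to
caps = ET.fromstring(conn.getCapabilities())

print('host uuid:', caps.findtext('./host/uuid'))
print('cpu model:', caps.findtext('./host/cpu/model'))
for cell in caps.findall('./host/topology/cells/cell'):
    mem_kib = cell.findtext('memory')
    ncpus = cell.find('cpus').get('num')
    print(f"NUMA cell {cell.get('id')}: {mem_kib} KiB memory, {ncpus} CPUs")
conn.close()
```

Against the host logged here this would report one NUMA cell with 7864312 KiB of memory and 8 CPUs.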
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.409 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
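For each guest architecture and machine type, nova then requests a separate `<domainCapabilities>` document (the dumps that follow). A minimal sketch of the same query via `getDomainCapabilities()`, summarising the `custom` CPU-model list and the `<blockers>` features that make a model unusable on this host; the emulator path and machine type are taken from the log, the rest is illustrative:

```python
# Summarise CPU model usability from libvirt's domain capabilities,
# the same data shown in the <domainCapabilities> dumps below.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')
caps_xml = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm')
root = ET.fromstring(caps_xml)

custom = root.find(".//cpu/mode[@name='custom']")
blockers = {b.get('model'): [f.get('name') for f in b.findall('feature')]
            for b in custom.findall('blockers')}
for model in custom.findall('model'):
    name = model.text
    if model.get('usable') == 'yes':
        print(f'{name}: usable')
    else:
        print(f"{name}: blocked by {', '.join(blockers.get(name, []))}")
conn.close()
```

On this AMD EPYC-Rome host the Intel models below (Broadwell, Cascadelake-Server, and so on) come back `usable='no'`, each with the missing host features listed as its blockers.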
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.413 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 13:55:25 compute-2 nova_compute[226433]: <domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <domain>kvm</domain>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <arch>i686</arch>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <vcpu max='4096'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <iothreads supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <os supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='firmware'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <loader supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>rom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pflash</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='readonly'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>yes</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='secure'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </loader>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </os>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='maximumMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <vendor>AMD</vendor>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='succor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='custom' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <memoryBacking supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='sourceType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>anonymous</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>memfd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </memoryBacking>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <disk supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='diskDevice'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>disk</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cdrom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>floppy</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>lun</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>fdc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>sata</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </disk>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <graphics supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vnc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egl-headless</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </graphics>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <video supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='modelType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vga</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cirrus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>none</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>bochs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ramfb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </video>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hostdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='mode'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>subsystem</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='startupPolicy'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>mandatory</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>requisite</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>optional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='subsysType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pci</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='capsType'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='pciBackend'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hostdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <rng supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>random</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </rng>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <filesystem supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='driverType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>path</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>handle</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtiofs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </filesystem>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tpm supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-tis</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-crb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emulator</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>external</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendVersion'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>2.0</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </tpm>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <redirdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </redirdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <channel supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </channel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <crypto supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </crypto>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <interface supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>passt</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </interface>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <panic supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>isa</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>hyperv</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </panic>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <console supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>null</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dev</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pipe</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stdio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>udp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tcp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu-vdagent</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </console>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <gic supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <genid supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backup supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <async-teardown supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <s390-pv supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <ps2 supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tdx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sev supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sgx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hyperv supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='features'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>relaxed</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vapic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>spinlocks</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vpindex</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>runtime</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>synic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stimer</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reset</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vendor_id</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>frequencies</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reenlightenment</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tlbflush</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ipi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>avic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emsr_bitmap</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>xmm_input</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hyperv>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <launchSecurity supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </features>
Jan 22 13:55:25 compute-2 nova_compute[226433]: </domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.425 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 13:55:25 compute-2 nova_compute[226433]: <domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <domain>kvm</domain>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <arch>i686</arch>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <vcpu max='240'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <iothreads supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <os supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='firmware'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <loader supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>rom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pflash</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='readonly'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>yes</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='secure'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </loader>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </os>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='maximumMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <vendor>AMD</vendor>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='succor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='custom' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:25.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <memoryBacking supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='sourceType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>anonymous</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>memfd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </memoryBacking>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <disk supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='diskDevice'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>disk</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cdrom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>floppy</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>lun</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ide</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>fdc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>sata</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </disk>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <graphics supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vnc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egl-headless</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </graphics>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <video supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='modelType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vga</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cirrus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>none</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>bochs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ramfb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </video>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hostdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='mode'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>subsystem</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='startupPolicy'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>mandatory</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>requisite</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>optional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='subsysType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pci</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='capsType'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='pciBackend'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hostdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <rng supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>random</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </rng>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <filesystem supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='driverType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>path</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>handle</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtiofs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </filesystem>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tpm supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-tis</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-crb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emulator</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>external</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendVersion'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>2.0</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </tpm>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <redirdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </redirdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <channel supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </channel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <crypto supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </crypto>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <interface supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>passt</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </interface>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <panic supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>isa</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>hyperv</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </panic>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <console supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>null</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dev</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pipe</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stdio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>udp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tcp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu-vdagent</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </console>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <gic supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <genid supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backup supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <async-teardown supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <s390-pv supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <ps2 supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tdx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sev supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sgx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hyperv supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='features'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>relaxed</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vapic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>spinlocks</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vpindex</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>runtime</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>synic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stimer</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reset</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vendor_id</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>frequencies</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reenlightenment</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tlbflush</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ipi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>avic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emsr_bitmap</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>xmm_input</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hyperv>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <launchSecurity supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </features>
Jan 22 13:55:25 compute-2 nova_compute[226433]: </domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.488 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.491 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 13:55:25 compute-2 nova_compute[226433]: <domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <domain>kvm</domain>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <arch>x86_64</arch>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <vcpu max='4096'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <iothreads supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <os supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='firmware'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>efi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <loader supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>rom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pflash</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='readonly'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>yes</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='secure'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>yes</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </loader>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </os>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='maximumMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <vendor>AMD</vendor>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='succor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='custom' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <memoryBacking supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='sourceType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>anonymous</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>memfd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </memoryBacking>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <disk supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='diskDevice'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>disk</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cdrom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>floppy</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>lun</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>fdc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>sata</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </disk>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <graphics supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vnc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egl-headless</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </graphics>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <video supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='modelType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vga</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cirrus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>none</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>bochs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ramfb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </video>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hostdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='mode'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>subsystem</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='startupPolicy'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>mandatory</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>requisite</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>optional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='subsysType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pci</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='capsType'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='pciBackend'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hostdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <rng supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>random</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </rng>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <filesystem supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='driverType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>path</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>handle</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtiofs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </filesystem>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tpm supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-tis</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-crb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emulator</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>external</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendVersion'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>2.0</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </tpm>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <redirdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </redirdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <channel supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </channel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <crypto supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </crypto>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <interface supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>passt</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </interface>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <panic supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>isa</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>hyperv</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </panic>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <console supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>null</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dev</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pipe</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stdio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>udp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tcp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu-vdagent</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </console>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <gic supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <genid supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backup supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <async-teardown supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <s390-pv supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <ps2 supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tdx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sev supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sgx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hyperv supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='features'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>relaxed</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vapic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>spinlocks</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vpindex</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>runtime</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>synic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stimer</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reset</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vendor_id</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>frequencies</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reenlightenment</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tlbflush</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ipi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>avic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emsr_bitmap</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>xmm_input</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hyperv>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <launchSecurity supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </features>
Jan 22 13:55:25 compute-2 nova_compute[226433]: </domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.571 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 13:55:25 compute-2 nova_compute[226433]: <domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <path>/usr/libexec/qemu-kvm</path>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <domain>kvm</domain>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <arch>x86_64</arch>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <vcpu max='240'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <iothreads supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <os supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='firmware'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <loader supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>rom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pflash</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='readonly'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>yes</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='secure'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>no</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </loader>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </os>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-passthrough' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='hostPassthroughMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='maximum' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='maximumMigratable'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>on</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>off</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='host-model' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <vendor>AMD</vendor>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='x2apic'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-deadline'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='hypervisor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc_adjust'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='spec-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='stibp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='cmp_legacy'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='overflow-recov'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='succor'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='amd-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='virt-ssbd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lbrv'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='tsc-scale'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='vmcb-clean'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='flushbyasid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pause-filter'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='pfthreshold'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='svme-addr-chk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <feature policy='disable' name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <mode name='custom' supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Broadwell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cascadelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='ClearwaterForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ddpd-u'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sha512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm3'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sm4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Cooperlake-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Denverton-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Dhyana-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Genoa-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Milan-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Rome-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-Turin-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amd-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='auto-ibrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vp2intersect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fs-gs-base-ns'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibpb-brtype'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='no-nested-data-bp'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='null-sel-clr-base'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='perfmon-v2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbpb'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='srso-user-kernel-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='stibp-always-on'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='EPYC-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='GraniteRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-128'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-256'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx10-512'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='prefetchiti'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Haswell-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-noTSX'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v6'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Icelake-Server-v7'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='IvyBridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='KnightsMill-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4fmaps'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-4vnniw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512er'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512pf'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G4-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Opteron_G5-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fma4'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tbm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xop'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SapphireRapids-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='amx-tile'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-bf16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-fp16'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512-vpopcntdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bitalg'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vbmi2'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrc'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fzrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='la57'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='taa-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='tsx-ldtrk'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='SierraForest-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ifma'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-ne-convert'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx-vnni-int8'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bhi-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='bus-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cmpccxadd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fbsdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='fsrs'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ibrs-all'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='intel-psfd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ipred-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='lam'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mcdt-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pbrsb-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='psdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rrsba-ctrl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='sbdr-ssdp-no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='serialize'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vaes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='vpclmulqdq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Client-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='hle'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='rtm'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Skylake-Server-v5'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512bw'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512cd'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512dq'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512f'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='avx512vl'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='invpcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pcid'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='pku'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='mpx'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v2'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v3'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='core-capability'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='split-lock-detect'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='Snowridge-v4'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='cldemote'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='erms'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='gfni'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdir64b'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='movdiri'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='xsaves'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='athlon-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='core2duo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='coreduo-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='n270-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='ss'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <blockers model='phenom-v1'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnow'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <feature name='3dnowext'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </blockers>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </mode>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <memoryBacking supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <enum name='sourceType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>anonymous</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <value>memfd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </memoryBacking>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <disk supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='diskDevice'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>disk</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cdrom</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>floppy</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>lun</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ide</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>fdc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>sata</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </disk>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <graphics supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vnc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egl-headless</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </graphics>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <video supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='modelType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vga</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>cirrus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>none</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>bochs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ramfb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </video>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hostdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='mode'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>subsystem</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='startupPolicy'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>mandatory</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>requisite</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>optional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='subsysType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pci</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>scsi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='capsType'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='pciBackend'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hostdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <rng supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtio-non-transitional</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>random</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>egd</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </rng>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <filesystem supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='driverType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>path</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>handle</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>virtiofs</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </filesystem>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tpm supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-tis</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tpm-crb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emulator</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>external</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendVersion'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>2.0</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </tpm>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <redirdev supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='bus'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>usb</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </redirdev>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <channel supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </channel>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <crypto supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendModel'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>builtin</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </crypto>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <interface supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='backendType'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>default</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>passt</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </interface>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <panic supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='model'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>isa</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>hyperv</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </panic>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <console supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='type'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>null</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vc</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pty</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dev</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>file</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>pipe</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stdio</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>udp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tcp</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>unix</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>qemu-vdagent</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>dbus</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </console>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </devices>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <features>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <gic supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <vmcoreinfo supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <genid supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backingStoreInput supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <backup supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <async-teardown supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <s390-pv supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <ps2 supported='yes'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <tdx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sev supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <sgx supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <hyperv supported='yes'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <enum name='features'>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>relaxed</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vapic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>spinlocks</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vpindex</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>runtime</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>synic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>stimer</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reset</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>vendor_id</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>frequencies</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>reenlightenment</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>tlbflush</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>ipi</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>avic</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>emsr_bitmap</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <value>xmm_input</value>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </enum>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       <defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <spinlocks>4095</spinlocks>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <stimer_direct>on</stimer_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_direct>on</tlbflush_direct>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <tlbflush_extended>on</tlbflush_extended>
Jan 22 13:55:25 compute-2 nova_compute[226433]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 13:55:25 compute-2 nova_compute[226433]:       </defaults>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     </hyperv>
Jan 22 13:55:25 compute-2 nova_compute[226433]:     <launchSecurity supported='no'/>
Jan 22 13:55:25 compute-2 nova_compute[226433]:   </features>
Jan 22 13:55:25 compute-2 nova_compute[226433]: </domainCapabilities>
Jan 22 13:55:25 compute-2 nova_compute[226433]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.634 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.634 226437 INFO nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Secure Boot support detected
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.636 226437 DEBUG nova.virt.libvirt.volume.mount [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.637 226437 INFO nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.649 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 13:55:25 compute-2 nova_compute[226433]:   <model>Nehalem</model>
Jan 22 13:55:25 compute-2 nova_compute[226433]: </cpu>
Jan 22 13:55:25 compute-2 nova_compute[226433]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.651 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.700 226437 INFO nova.virt.node [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Determined node identity d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from /var/lib/nova/compute_id
Jan 22 13:55:25 compute-2 nova_compute[226433]: 2026-01-22 13:55:25.795 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Verified node d4dcb68c-0009-4467-a6f7-0e9fe0236fbc matches my host compute-2.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Jan 22 13:55:25 compute-2 sshd-session[202253]: Connection closed by 192.168.122.30 port 59414
Jan 22 13:55:25 compute-2 sshd-session[202250]: pam_unix(sshd:session): session closed for user zuul
Jan 22 13:55:25 compute-2 systemd[1]: session-49.scope: Deactivated successfully.
Jan 22 13:55:25 compute-2 systemd[1]: session-49.scope: Consumed 2min 615ms CPU time.
Jan 22 13:55:25 compute-2 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Jan 22 13:55:25 compute-2 systemd-logind[787]: Removed session 49.
Jan 22 13:55:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:25.981+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:26 compute-2 nova_compute[226433]: 2026-01-22 13:55:26.062 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 22 13:55:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:26 compute-2 nova_compute[226433]: 2026-01-22 13:55:26.795 226437 ERROR nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Could not retrieve compute node resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-77ac1ed1-4613-4939-b9ce-bd0ba145b90b"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-77ac1ed1-4613-4939-b9ce-bd0ba145b90b"}]}
Jan 22 13:55:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:26 compute-2 nova_compute[226433]: 2026-01-22 13:55:26.825 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:26 compute-2 nova_compute[226433]: 2026-01-22 13:55:26.825 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:26 compute-2 nova_compute[226433]: 2026-01-22 13:55:26.826 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:26 compute-2 nova_compute[226433]: 2026-01-22 13:55:26.826 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:55:26 compute-2 nova_compute[226433]: 2026-01-22 13:55:26.826 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:26.977+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:55:27 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/788234680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:27 compute-2 ceph-mon[77081]: pgmap v811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2972063964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.248 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.392 226437 WARNING nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.393 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5240MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.393 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.393 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 13:55:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.573 226437 ERROR nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-7a92631c-3b10-4c61-9675-104bac57ecff"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-7a92631c-3b10-4c61-9675-104bac57ecff"}]}
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.574 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:55:27 compute-2 nova_compute[226433]: 2026-01-22 13:55:27.574 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:55:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:27.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:28 compute-2 nova_compute[226433]: 2026-01-22 13:55:28.263 226437 INFO nova.scheduler.client.report [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [req-8c319d9d-459a-4b8f-91bb-e2a832e80a5c] Created resource provider record via placement API for resource provider with UUID d4dcb68c-0009-4467-a6f7-0e9fe0236fbc and name compute-2.ctlplane.example.com.
Jan 22 13:55:28 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/788234680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:28 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/4194248427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:28 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:28.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:28 compute-2 nova_compute[226433]: 2026-01-22 13:55:28.902 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:55:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:28.942+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:55:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3711 writes, 21K keys, 3711 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 3711 writes, 3711 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1676 writes, 8855 keys, 1676 commit groups, 1.0 writes per commit group, ingest: 15.75 MB, 0.03 MB/s
                                           Interval WAL: 1676 writes, 1676 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     80.5      0.29              0.06        11    0.027       0      0       0.0       0.0
                                             L6      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    125.1    105.3      0.79              0.20        10    0.079     53K   5365       0.0       0.0
                                            Sum      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     91.2     98.6      1.08              0.27        21    0.052     53K   5365       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.5     82.8     82.9      0.70              0.16        12    0.059     35K   3554       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    125.1    105.3      0.79              0.20        10    0.079     53K   5365       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     81.4      0.29              0.06        10    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.023, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 1.1 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 7.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(357,6.61 MB,2.17451%) FilterBlock(21,158.98 KB,0.0510718%) IndexBlock(21,261.39 KB,0.0839685%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 13:55:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:55:29 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2684260221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.361 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.366 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 22 13:55:29 compute-2 nova_compute[226433]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.366 226437 INFO nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] kernel doesn't support AMD SEV
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.367 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.367 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.369 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt baseline CPU <cpu>
Jan 22 13:55:29 compute-2 nova_compute[226433]:   <arch>x86_64</arch>
Jan 22 13:55:29 compute-2 nova_compute[226433]:   <model>Nehalem</model>
Jan 22 13:55:29 compute-2 nova_compute[226433]:   <vendor>AMD</vendor>
Jan 22 13:55:29 compute-2 nova_compute[226433]:   <topology sockets="8" cores="1" threads="1"/>
Jan 22 13:55:29 compute-2 nova_compute[226433]: </cpu>
Jan 22 13:55:29 compute-2 nova_compute[226433]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.512 226437 DEBUG nova.scheduler.client.report [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updated inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.513 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 13:55:29 compute-2 nova_compute[226433]: 2026-01-22 13:55:29.513 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 13:55:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:29 compute-2 ceph-mon[77081]: pgmap v812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2684260221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1006289607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:29.927+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:30 compute-2 nova_compute[226433]: 2026-01-22 13:55:30.546 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 13:55:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:30.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:30.923+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:31 compute-2 ceph-mon[77081]: pgmap v813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1504024635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:55:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:31.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:32.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:32 compute-2 nova_compute[226433]: 2026-01-22 13:55:32.835 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:55:32 compute-2 nova_compute[226433]: 2026-01-22 13:55:32.835 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:32 compute-2 nova_compute[226433]: 2026-01-22 13:55:32.836 226437 DEBUG nova.service [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 22 13:55:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:32.906+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:32 compute-2 nova_compute[226433]: 2026-01-22 13:55:32.933 226437 DEBUG nova.service [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 22 13:55:32 compute-2 nova_compute[226433]: 2026-01-22 13:55:32.934 226437 DEBUG nova.servicegroup.drivers.db [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 22 13:55:33 compute-2 rsyslogd[1002]: imjournal from <np0005592159:nova_compute>: begin to drop messages due to rate-limiting
Jan 22 13:55:33 compute-2 sudo[226802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:33 compute-2 sudo[226802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-2 sudo[226802]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:33.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:33 compute-2 sudo[226827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:55:33 compute-2 sudo[226827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-2 sudo[226827]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:33 compute-2 ceph-mon[77081]: pgmap v814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:33 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1124 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:33 compute-2 sudo[226852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:33 compute-2 sudo[226852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-2 sudo[226852]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:33 compute-2 sudo[226877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:55:33 compute-2 sudo[226877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:33.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:34 compute-2 sudo[226877]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 13:55:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:34.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 13:55:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:34.924+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:35 compute-2 ceph-mon[77081]: pgmap v815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:55:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:55:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:35.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:35.933+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:36 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:55:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:55:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:55:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:36.887+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:37 compute-2 ceph-mon[77081]: pgmap v816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:37.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:37.858+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:38.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:38.855+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:39 compute-2 podman[226935]: 2026-01-22 13:55:39.089933521 +0000 UTC m=+0.138500237 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 13:55:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:39 compute-2 ceph-mon[77081]: pgmap v817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:39 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1129 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:39.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:40 compute-2 sshd-session[226962]: Invalid user minima from 92.118.39.95 port 53096
Jan 22 13:55:40 compute-2 sudo[226964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:40 compute-2 sudo[226964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:40 compute-2 sudo[226964]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:40 compute-2 sudo[226990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:40 compute-2 sudo[226990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:40 compute-2 sudo[226990]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:40 compute-2 sshd-session[226962]: Connection closed by invalid user minima 92.118.39.95 port 53096 [preauth]
Jan 22 13:55:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:40 compute-2 ceph-mon[77081]: pgmap v818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:40.945+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:41.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:41 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:41.903+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:42 compute-2 sudo[227015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:55:42 compute-2 sudo[227015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:42 compute-2 sudo[227015]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:42 compute-2 sudo[227040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:55:42 compute-2 sudo[227040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:55:42 compute-2 sudo[227040]: pam_unix(sudo:session): session closed for user root
Jan 22 13:55:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:55:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:42 compute-2 ceph-mon[77081]: pgmap v819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:42.910+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:43.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:43.934+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:43 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1134 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:44.940+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:45 compute-2 ceph-mon[77081]: pgmap v820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:45.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:45.910+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:46 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:46 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:46.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:46.953+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:55:47.163 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:55:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:55:47.163 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:55:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:55:47.164 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:55:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:47 compute-2 ceph-mon[77081]: pgmap v821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:47.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:47.949+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:48.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:48.965+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:49 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:49.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:49.917+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:50 compute-2 ceph-mon[77081]: pgmap v822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:50 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:50 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:50.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:50.934+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:50 compute-2 nova_compute[226433]: 2026-01-22 13:55:50.935 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:55:51 compute-2 nova_compute[226433]: 2026-01-22 13:55:51.153 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:55:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:51 compute-2 ceph-mon[77081]: pgmap v823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:51.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:51.942+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:52.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:52.965+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:53 compute-2 ceph-mon[77081]: pgmap v824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:53.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:53.978+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:54 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:54.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:54.934+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:55 compute-2 podman[227072]: 2026-01-22 13:55:55.008341169 +0000 UTC m=+0.059732992 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:55:55 compute-2 ceph-mon[77081]: pgmap v825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:55.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:55.935+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:56 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:55:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:56.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:55:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:56.972+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:55:57 compute-2 ceph-mon[77081]: pgmap v826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:57.943+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:55:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:58.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:55:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:58.967+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:55:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:55:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:55:59 compute-2 ceph-mon[77081]: pgmap v827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:55:59 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1149 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:55:59 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:55:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:59.936+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:55:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:00 compute-2 sudo[227095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:00 compute-2 sudo[227095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:00 compute-2 sudo[227095]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:00 compute-2 sudo[227120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:00 compute-2 sudo[227120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:00 compute-2 sudo[227120]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:00 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:00 compute-2 ceph-mon[77081]: pgmap v828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:00.934+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:01.927+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:02 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:02.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:02.973+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:03 compute-2 ceph-mon[77081]: pgmap v829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:03.996+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:04 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:56:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:04.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:56:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:04.954+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:05.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:05.977+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:06 compute-2 ceph-mon[77081]: pgmap v830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:56:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:56:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:06.977+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:07.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:07 compute-2 ceph-mon[77081]: pgmap v831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 13:56:07 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141067726' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 13:56:07 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141067726' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
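These audit entries are the mon-side view of OpenStack's periodic capacity polling: client.openstack on 192.168.122.10 issues "df" and "osd pool get-quota" as JSON mon commands every few seconds. A sketch with python-rados, assuming a reachable cluster and a readable client.openstack keyring, that issues the same two commands the audit log records:

```python
import json

import rados

# Reproduce the polled mon commands seen above ("df" and
# "osd pool get-quota" on the volumes pool) via librados.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, out[:80])
finally:
    cluster.shutdown()
```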
Jan 22 13:56:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:07.987+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:08 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2141067726' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:08 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2141067726' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:56:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:56:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:08.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:56:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:09.016+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:09.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:09 compute-2 ceph-mon[77081]: pgmap v832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:09 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1159 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:09 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3332621159' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:09 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3332621159' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:56:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:10.057+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:10 compute-2 podman[227149]: 2026-01-22 13:56:10.066273987 +0000 UTC m=+0.121284184 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
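podman's periodic healthcheck runs produce these container health_status events; here ovn_controller reports healthy with a zero failing streak, using the /openstack/healthcheck test mounted into the container. A sketch for querying the same state on demand, assuming the container name from the log; the inspect field name has varied across podman releases, so it probes both spellings:

```python
import json
import subprocess

# Ask podman for the health state it just logged for ovn_controller.
raw = subprocess.run(
    ["podman", "inspect", "ovn_controller"],
    capture_output=True, check=True, text=True,
).stdout
state = json.loads(raw)[0]["State"]
# Older podman nests this under "Healthcheck", newer under "Health".
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status", "unknown"), health.get("FailingStreak"))
```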
Jan 22 13:56:10 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:56:10 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2769503677' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:56:10 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:10 compute-2 ceph-mon[77081]: pgmap v833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:11.010+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:11.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:11.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:12.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:12.969+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:13 compute-2 ceph-mon[77081]: pgmap v834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:13.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:14.009+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:14 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1164 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:14.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:15.006+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:15.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:15 compute-2 ceph-mon[77081]: pgmap v835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:16.056+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:16.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:16 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:16 compute-2 ceph-mon[77081]: pgmap v836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:17.027+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:17.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:18.051+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:56:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:56:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:19.038+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:19 compute-2 ceph-mon[77081]: pgmap v837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:19 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:56:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:19.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:56:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:20.016+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:20 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:20.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:20 compute-2 sudo[227183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:20 compute-2 sudo[227183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:20 compute-2 sudo[227183]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:20.969+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:20 compute-2 sudo[227208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:20 compute-2 sudo[227208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:20 compute-2 sudo[227208]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:21.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:21 compute-2 ceph-mon[77081]: pgmap v838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:21.952+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:22 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:56:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:56:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:22.963+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:23.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:23 compute-2 ceph-mon[77081]: pgmap v839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:23 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:23.997+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.518 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.519 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.520 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.520 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.551 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.552 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.552 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.553 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.553 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.553 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.553 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.553 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.553 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.596 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.597 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.597 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.598 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:56:24 compute-2 nova_compute[226433]: 2026-01-22 13:56:24.599 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
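Here the resource tracker shells out to `ceph df` (with the client.openstack identity) to size the RBD-backed disk pool; the dispatch shows up moments later in the ceph-mon audit lines. A sketch of the same probe, not nova's own code, using the exact command line logged above and reading the cluster-wide totals from the JSON (the "stats"/"pools" layout is the standard `ceph df --format=json` schema; adjust paths and ids for your deployment):

```python
import json
import subprocess

# Re-run the capacity probe nova_compute just logged and print free space.
cmd = [
    "ceph", "df", "--format=json",
    "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)["stats"]  # cluster totals; per-pool data is in "pools"
print(f'avail: {stats["total_avail_bytes"] / 2**30:.2f} GiB '
      f'of {stats["total_bytes"] / 2**30:.2f} GiB')
```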
Jan 22 13:56:24 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1174 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:24.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:56:25 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/409233514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.105 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:56:25 compute-2 sshd-session[227235]: Invalid user solv from 45.148.10.240 port 38770
Jan 22 13:56:25 compute-2 sshd-session[227235]: Connection closed by invalid user solv 45.148.10.240 port 38770 [preauth]
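Interleaved with the storage noise, sshd records a one-off probe: an invalid user "solv" from 45.148.10.240 that disconnects before authenticating. It is unrelated to the Ceph incident, but the line shape is easy to tally per source address; a sketch against the format shown:

```python
import re
from collections import Counter

# Count "Invalid user NAME from IP port P" attempts per source address.
INVALID = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+")

def count_probes(lines):
    hits = Counter()
    for line in lines:
        m = INVALID.search(line)
        if m:
            hits[m["ip"]] += 1
    return hits

print(count_probes([
    "sshd-session[227235]: Invalid user solv from 45.148.10.240 port 38770",
]))  # Counter({'45.148.10.240': 1})
```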
Jan 22 13:56:25 compute-2 podman[227259]: 2026-01-22 13:56:25.242350589 +0000 UTC m=+0.089217980 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.305 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.306 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5318MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.306 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.306 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.489 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.489 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.507 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:56:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:25.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:25 compute-2 ceph-mon[77081]: pgmap v840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:25 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4187439219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:25 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/409233514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2838972096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:56:25 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3407441321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.948 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.955 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:56:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:25.969+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.985 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.987 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:56:25 compute-2 nova_compute[226433]: 2026-01-22 13:56:25.988 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
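The inventory nova just confirmed with placement implies the host's schedulable capacity directly: placement exposes (total - reserved) * allocation_ratio per resource class. Pure arithmetic on the logged figures (7679 MB RAM with 512 reserved at 1.0, 8 vCPUs at 4.0, 20 GB disk at 0.9):

```python
# Effective schedulable capacity implied by the inventory line above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 20, "reserved": 0, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 18
```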
Jan 22 13:56:26 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3407441321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:26 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/577808434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:26.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:26.964+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:27 compute-2 ceph-mon[77081]: pgmap v841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/709703454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:56:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:27.946+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:28 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:28.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:28.994+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:29.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:29 compute-2 ceph-mon[77081]: pgmap v842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:29 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1179 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:30.034+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:30.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:30 compute-2 ceph-mon[77081]: pgmap v843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:31.023+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 13:56:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Cumulative writes: 5189 writes, 22K keys, 5189 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5189 writes, 796 syncs, 6.52 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 404 writes, 615 keys, 404 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s
                                           Interval WAL: 404 writes, 189 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 22 13:56:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:31.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:32.006+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:32 compute-2 ceph-mon[77081]: pgmap v844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:33.049+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:33.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:34 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:34 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:34.058+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:35 compute-2 ceph-mon[77081]: pgmap v845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:35.063+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:36 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:36.096+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:37.079+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:37 compute-2 ceph-mon[77081]: pgmap v846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:37.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:38.091+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 22 13:56:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:38.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 22 13:56:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:39.122+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:39 compute-2 ceph-mon[77081]: pgmap v847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:39 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:39.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:40.117+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:40.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:41 compute-2 sudo[227327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:41 compute-2 sudo[227327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:41 compute-2 podman[227308]: 2026-01-22 13:56:41.092245169 +0000 UTC m=+0.148807531 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:56:41 compute-2 sudo[227327]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:41.151+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:41 compute-2 sudo[227359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:41 compute-2 sudo[227359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:41 compute-2 sudo[227359]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:41.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:41 compute-2 ceph-mon[77081]: pgmap v848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:41 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:42.195+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:42 compute-2 sudo[227384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:42 compute-2 sudo[227384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-2 sudo[227384]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:42 compute-2 sudo[227409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:56:42 compute-2 sudo[227409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-2 sudo[227409]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:42 compute-2 sudo[227435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:42 compute-2 sudo[227435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-2 sudo[227435]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:42 compute-2 sudo[227460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 13:56:42 compute-2 sudo[227460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:42.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:42 compute-2 sudo[227460]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:43 compute-2 sudo[227505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:43 compute-2 sudo[227505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:43 compute-2 sudo[227505]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:43 compute-2 sudo[227530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:56:43 compute-2 sudo[227530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:43 compute-2 sudo[227530]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:43.226+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:43 compute-2 sudo[227555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:43 compute-2 sudo[227555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:43 compute-2 sudo[227555]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:43 compute-2 sudo[227580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 13:56:43 compute-2 sudo[227580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:43 compute-2 ceph-mon[77081]: pgmap v849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 13:56:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 13:56:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:43.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:44 compute-2 podman[227679]: 2026-01-22 13:56:44.016982462 +0000 UTC m=+0.087819261 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 13:56:44 compute-2 podman[227679]: 2026-01-22 13:56:44.13928915 +0000 UTC m=+0.210125939 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:56:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:44.202+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:44 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:44 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:44.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:45 compute-2 podman[227838]: 2026-01-22 13:56:45.030788351 +0000 UTC m=+0.093135528 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:56:45 compute-2 podman[227838]: 2026-01-22 13:56:45.047788865 +0000 UTC m=+0.110135952 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 13:56:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:45.224+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:45 compute-2 podman[227902]: 2026-01-22 13:56:45.349598046 +0000 UTC m=+0.083902815 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 13:56:45 compute-2 podman[227902]: 2026-01-22 13:56:45.369940053 +0000 UTC m=+0.104244792 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 13:56:45 compute-2 sudo[227580]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:45.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:45 compute-2 sudo[227935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:45 compute-2 sudo[227935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:45 compute-2 sudo[227935]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:45 compute-2 sudo[227960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:56:45 compute-2 ceph-mon[77081]: pgmap v850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:45 compute-2 sudo[227960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:45 compute-2 sudo[227960]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:45 compute-2 sudo[227985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:45 compute-2 sudo[227985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:45 compute-2 sudo[227985]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:45 compute-2 sudo[228010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:56:45 compute-2 sudo[228010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:46.190+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:46 compute-2 sudo[228010]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:46 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:46.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:56:47.164 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:56:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:56:47.165 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:56:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:56:47.165 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:56:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:47.216+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:47 compute-2 ceph-mon[77081]: pgmap v851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:56:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:56:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:56:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:56:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:56:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:48.257+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:49 compute-2 ceph-mon[77081]: pgmap v852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:49.261+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:50 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:50 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:50.296+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:50.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:51 compute-2 ceph-mon[77081]: pgmap v853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:51.311+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:51.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:52.294+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:52.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:53 compute-2 ceph-mon[77081]: pgmap v854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:53.280+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:54 compute-2 sudo[228070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:56:54 compute-2 sudo[228070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:54 compute-2 sudo[228070]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:54 compute-2 sudo[228095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:56:54 compute-2 sudo[228095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:56:54 compute-2 sudo[228095]: pam_unix(sudo:session): session closed for user root
Jan 22 13:56:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:54.328+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:56:54 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:56:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:54.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:55.375+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:55.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:55 compute-2 ceph-mon[77081]: pgmap v855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:56 compute-2 podman[228121]: 2026-01-22 13:56:56.021440066 +0000 UTC m=+0.069462078 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:56:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:56.365+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:56.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:56:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 22 13:56:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:57.415+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:57 compute-2 ceph-mon[77081]: pgmap v856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:57.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 13:56:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 22 13:56:57 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 22 13:56:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:58.396+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:56:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:56:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:56:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:56:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:56:59.424+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:56:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:56:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:56:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:56:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:56:59.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:56:59 compute-2 ceph-mon[77081]: pgmap v857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:56:59 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 13:56:59 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:00.419+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:00.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:00 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:00 compute-2 ceph-mon[77081]: pgmap v858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Jan 22 13:57:01 compute-2 sudo[228141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:01 compute-2 sudo[228141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:01 compute-2 sudo[228141]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:01 compute-2 sudo[228166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:01 compute-2 sudo[228166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:01 compute-2 sudo[228166]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:01.418+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:57:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:01.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:57:01 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:02.370+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:02.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:03 compute-2 ceph-mon[77081]: pgmap v859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
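[editor's note] The pgmap lines report 2 of 305 PGs stuck in active+clean+laggy across many consecutive versions, consistent with the blocked reads on osd.2. A minimal sketch for listing exactly which PGs are laggy; `ceph pg dump pgs --format=json` is a real command, but the top-level JSON key ("pg_stats" in newer releases) is treated as an assumption:

    # Sketch: print PGs whose state includes "laggy".
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "pg", "dump", "pgs", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    pgs = data.get("pg_stats", data if isinstance(data, list) else [])
    for pg in pgs:
        if "laggy" in pg.get("state", ""):
            print(pg["pgid"], pg["state"], pg.get("up_primary"))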
Jan 22 13:57:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:03.326+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:03.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:04 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:04.279+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:04.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:05.312+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 13:57:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:05.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 13:57:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:05 compute-2 ceph-mon[77081]: pgmap v860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:06.345+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:07.339+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:57:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:07.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:57:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:08.382+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:08.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:09 compute-2 ceph-mon[77081]: pgmap v861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:09.384+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:09.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:10 compute-2 ceph-mon[77081]: pgmap v862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:10 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:10 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:10.373+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:10.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:11 compute-2 ceph-mon[77081]: pgmap v863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 0 B/s wr, 138 op/s
Jan 22 13:57:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:11.340+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:12 compute-2 podman[228196]: 2026-01-22 13:57:12.032071582 +0000 UTC m=+0.094520728 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
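[editor's note] The podman event above is a container healthcheck result for ovn_controller (health_status=healthy, failing streak 0), driven by the '/openstack/healthcheck' test in its config_data. A minimal sketch for querying the same state on demand with podman's real inspect interface; the container name is taken from the log line:

    # Sketch: read a container's recorded health status via podman inspect.
    import subprocess

    def container_health(name: str) -> str:
        return subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    print(container_health("ovn_controller"))  # expected: "healthy"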
Jan 22 13:57:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:12.387+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:12.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:13.420+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:13 compute-2 ceph-mon[77081]: pgmap v864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 122 op/s
Jan 22 13:57:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:14.373+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:14 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:14.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:15.415+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:15 compute-2 ceph-mon[77081]: pgmap v865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 22 13:57:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 22 13:57:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:15.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 22 13:57:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:16.453+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:16 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:16.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:17.485+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:17.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:17 compute-2 ceph-mon[77081]: pgmap v866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 13:57:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190481051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:57:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 13:57:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190481051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
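[editor's note] The audit entries above show client.openstack polling cluster capacity ("df") and the quota on the volumes pool, the usual Cinder/Nova stats path. A minimal sketch of the equivalent CLI query, using the same `--id openstack` credential named in the audit line; the example output shape is indicative, not copied from this cluster:

    # Sketch: fetch a pool's quota as JSON, as the audited command does.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "pool", "get-quota", "volumes",
         "--format=json", "--id", "openstack"],
        check=True, capture_output=True, text=True,
    ).stdout
    # e.g. {"pool_name": "volumes", "quota_max_objects": 0, "quota_max_bytes": 0}
    print(json.loads(out))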
Jan 22 13:57:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:18.520+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2190481051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:57:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2190481051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:57:18 compute-2 ceph-mon[77081]: pgmap v867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 22 13:57:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:18.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 22 13:57:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:19.489+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:19.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:19 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:20.459+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:20 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:20 compute-2 ceph-mon[77081]: pgmap v868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:21.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:21 compute-2 sudo[228227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:21 compute-2 sudo[228227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:21 compute-2 sudo[228227]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:21.495+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:21 compute-2 sudo[228252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:21 compute-2 sudo[228252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:21 compute-2 sudo[228252]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:21.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:22.476+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:22 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:22 compute-2 ceph-mon[77081]: pgmap v869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:23.476+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:23.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:24 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:24.481+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:24 compute-2 ceph-mon[77081]: pgmap v870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:25.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:25.457+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:25.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:25 compute-2 nova_compute[226433]: 2026-01-22 13:57:25.980 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.000 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.000 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.000 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:57:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.022 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.023 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.023 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.023 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.024 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.024 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.024 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.024 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.024 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
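[editor's note] The burst of "Running periodic task ComputeManager._..." DEBUG lines above comes from oslo.service's periodic-task loop. A minimal sketch of that pattern using the real oslo_service/oslo_config APIs; the manager and task names here are illustrative stand-ins, not nova's actual ComputeManager:

    # Sketch: the oslo.service periodic-task pattern behind those lines.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_immediately=True so a single run_periodic_tasks() call fires it.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_cache(self, context):
            print("healing info cache")

    manager = DemoManager()
    manager.run_periodic_tasks(context=None)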
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.054 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.054 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.054 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.054 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.055 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:57:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:26.458+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:57:26 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2928817451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.482 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.710 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.711 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5298MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.712 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.712 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.821 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.821 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:57:26 compute-2 nova_compute[226433]: 2026-01-22 13:57:26.849 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:57:26 compute-2 podman[228303]: 2026-01-22 13:57:26.995811311 +0000 UTC m=+0.061100550 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 13:57:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:27 compute-2 ceph-mon[77081]: pgmap v871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3781396254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2928817451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:57:27 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4110742391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:27 compute-2 nova_compute[226433]: 2026-01-22 13:57:27.310 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
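The resource tracker's periodic audit shells out to ceph df (the CMD logged above) to size the RBD-backed disk inventory. A sketch running the same probe and summarizing the result; the JSON key names assume the current "ceph df" schema, which can differ across Ceph releases:

import json
import subprocess

# Same command and flags as the CMD logged by oslo_concurrency above.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    timeout=30,
)
df = json.loads(out)

# Cluster-wide totals in bytes; key names are assumptions per the schema note.
stats = df["stats"]
print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])

# Per-pool usage, e.g. the "vms" pool named in the slow-request lines.
for pool in df["pools"]:
    if pool["name"] == "vms":
        print("vms bytes_used:", pool["stats"]["bytes_used"])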
Jan 22 13:57:27 compute-2 nova_compute[226433]: 2026-01-22 13:57:27.319 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:57:27 compute-2 nova_compute[226433]: 2026-01-22 13:57:27.429 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
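Placement decides "inventory has not changed" by comparing records of (total, reserved, allocation_ratio, ...). The effective schedulable capacity per resource class follows Placement's usual formula, capacity = (total - reserved) * allocation_ratio; applied to the inventory logged above:

# Inventory values copied from the log line above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 20, "reserved": 0, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 18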
Jan 22 13:57:27 compute-2 nova_compute[226433]: 2026-01-22 13:57:27.431 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:57:27 compute-2 nova_compute[226433]: 2026-01-22 13:57:27.432 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:57:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:27.488+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:27.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:27 compute-2 nova_compute[226433]: 2026-01-22 13:57:27.962 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:57:28 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:28 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2038610146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:28 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4110742391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:28.450+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:29.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:29 compute-2 ceph-mon[77081]: pgmap v872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
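The pgmap lines keep reporting two PGs in active+clean+laggy without naming them. One way to identify exactly which PGs are laggy is to dump PG state as JSON and filter; the "-f json" output shape assumed below (a pg_stats list, wrapped in an object on recent releases) may vary:

import json
import subprocess

out = subprocess.check_output(["ceph", "pg", "ls", "-f", "json"])
data = json.loads(out)
# Recent releases wrap the list in {"pg_stats": [...]}; older ones return a bare list.
pg_stats = data["pg_stats"] if isinstance(data, dict) else data
for pg in pg_stats:
    if "laggy" in pg["state"]:
        print(pg["pgid"], pg["state"], pg["acting"])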
Jan 22 13:57:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/742374999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:29 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:29.427+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
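osd.2 keeps reporting the same four slow ops, the oldest an omap read against rbd_mirror_snapshot_schedule in the vms pool. To see the full op records rather than the one-line summary, the OSD's admin socket can be queried; with containerized daemons like these the command would presumably need to run inside the osd.2 container or via cephadm shell, which is an assumption about this deployment:

import json
import subprocess

# "dump_ops_in_flight" is a standard OSD admin-socket command.
out = subprocess.check_output(["ceph", "daemon", "osd.2", "dump_ops_in_flight"])
ops = json.loads(out)
print("ops in flight:", ops["num_ops"])
for op in ops["ops"]:
    # Each record carries an age and the same osd_op description format
    # seen in the get_health_metrics lines above.
    print(round(op["age"], 1), op["description"])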
Jan 22 13:57:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:29.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2685310246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:57:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:30.433+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:31.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:31.463+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:31 compute-2 ceph-mon[77081]: pgmap v873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.635396) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251635431, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2498, "num_deletes": 251, "total_data_size": 5080516, "memory_usage": 5160664, "flush_reason": "Manual Compaction"}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251661484, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 3317147, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21139, "largest_seqno": 23632, "table_properties": {"data_size": 3307579, "index_size": 5614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24430, "raw_average_key_size": 21, "raw_value_size": 3286471, "raw_average_value_size": 2905, "num_data_blocks": 244, "num_entries": 1131, "num_filter_entries": 1131, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090073, "oldest_key_time": 1769090073, "file_creation_time": 1769090251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 26134 microseconds, and 8884 cpu microseconds.
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.661530) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 3317147 bytes OK
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.661547) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.664815) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.664847) EVENT_LOG_v1 {"time_micros": 1769090251664841, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.664867) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 5069115, prev total WAL file size 5069115, number of live WAL files 2.
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.666748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(3239KB)], [42(8175KB)]
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251666815, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 11689317, "oldest_snapshot_seqno": -1}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5936 keys, 9842969 bytes, temperature: kUnknown
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251744402, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 9842969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9803959, "index_size": 23092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 153663, "raw_average_key_size": 25, "raw_value_size": 9696423, "raw_average_value_size": 1633, "num_data_blocks": 927, "num_entries": 5936, "num_filter_entries": 5936, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.744647) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9842969 bytes
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.746404) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.5 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6455, records dropped: 519 output_compression: NoCompression
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.746425) EVENT_LOG_v1 {"time_micros": 1769090251746415, "job": 24, "event": "compaction_finished", "compaction_time_micros": 77655, "compaction_time_cpu_micros": 26832, "output_level": 6, "num_output_files": 1, "total_output_size": 9842969, "num_input_records": 6455, "num_output_records": 5936, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251747617, "job": 24, "event": "table_file_deletion", "file_number": 44}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090251749669, "job": 24, "event": "table_file_deletion", "file_number": 42}
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.666638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.749790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.749794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.749796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.749798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:57:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:57:31.749800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
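The rocksdb flush and JOB 24 compaction above are the monitor compacting its own store.db (the paths under /var/lib/ceph/mon/ceph-compute-2 appear in the delete_scheduler lines). The same compaction can be requested on demand with the standard mon "compact" command; the mon name is taken from the log:

import subprocess

# Ask the monitor to compact its store (same effect as the manual
# compaction jobs logged above), then check the resulting size.
subprocess.run(["ceph", "tell", "mon.compute-2", "compact"], check=True)
subprocess.run(["du", "-sh", "/var/lib/ceph/mon/ceph-compute-2/store.db"], check=True)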
Jan 22 13:57:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:32.479+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:33.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:33.490+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:33.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:33 compute-2 ceph-mon[77081]: pgmap v874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:34.479+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:35 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:35 compute-2 ceph-mon[77081]: pgmap v875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:35.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:35.508+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:35.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:36 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:36.503+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:37.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:37 compute-2 ceph-mon[77081]: pgmap v876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:37.523+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:38.541+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:39.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:39 compute-2 ceph-mon[77081]: pgmap v877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:39 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1249 sec, osd.2 has slow ops (SLOW_OPS)
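The SLOW_OPS health check keeps updating with a growing "blocked for N sec" counter. The same information is queryable on demand as structured health output; the JSON key layout assumed below matches recent Ceph releases:

import json
import subprocess

out = subprocess.check_output(["ceph", "health", "detail", "--format=json"])
health = json.loads(out)
slow = health["checks"].get("SLOW_OPS")  # key layout is an assumption
if slow:
    print(slow["summary"]["message"])    # e.g. "4 slow ops, oldest one blocked ..."
    for item in slow.get("detail", []):
        print(item["message"])           # e.g. "osd.2 has slow ops"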
Jan 22 13:57:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:39.535+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:39.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:40.523+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:41.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:41 compute-2 ceph-mon[77081]: pgmap v878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:41 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:41.480+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:41 compute-2 sudo[228351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:41 compute-2 sudo[228351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:41 compute-2 sudo[228351]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:41 compute-2 sudo[228376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:41 compute-2 sudo[228376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:41 compute-2 sudo[228376]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:41.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:42.474+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:43 compute-2 podman[228402]: 2026-01-22 13:57:43.060230945 +0000 UTC m=+0.115486823 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 13:57:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:43.495+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:43 compute-2 ceph-mon[77081]: pgmap v879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:44.508+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:44 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:44 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1254 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:44 compute-2 ceph-mon[77081]: pgmap v880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:45.527+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:46.573+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:46 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:46 compute-2 ceph-mon[77081]: pgmap v881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:47.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:57:47.165 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:57:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:57:47.165 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:57:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:57:47.165 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
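The acquired/released pair logged by ovn_metadata_agent is oslo.concurrency's named in-process lock around ProcessMonitor._check_child_processes. Roughly, the pattern that produces these lines looks like the following sketch (not the actual neutron source):

from oslo_concurrency import lockutils

# A named oslo lock: entering logs "acquired", leaving logs "released",
# exactly the pair shown above.
@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    ...  # inspect monitored children and respawn any that died

check_child_processes()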
Jan 22 13:57:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:47.539+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:48.568+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:48 compute-2 ceph-mon[77081]: pgmap v882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:48 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1259 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:49.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:49.592+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:49.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:49 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:50 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:57:50.221 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 13:57:50 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:57:50.221 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 13:57:50 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:57:50.222 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 13:57:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:50.593+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:51 compute-2 ceph-mon[77081]: pgmap v883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:51.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:51.643+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:51.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:52.685+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:53.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:53 compute-2 ceph-mon[77081]: pgmap v884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:53.717+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:53.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:54 compute-2 sudo[228433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:54 compute-2 sudo[228433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-2 sudo[228433]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:54 compute-2 sudo[228458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:57:54 compute-2 sudo[228458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-2 sudo[228458]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:54 compute-2 sudo[228483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:57:54 compute-2 sudo[228483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-2 sudo[228483]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:54 compute-2 sudo[228509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:57:54 compute-2 sudo[228509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:57:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:54 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1264 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:54.683+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:55.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:55 compute-2 sudo[228509]: pam_unix(sudo:session): session closed for user root
Jan 22 13:57:55 compute-2 sshd-session[228534]: Invalid user mina from 92.118.39.95 port 60294
Jan 22 13:57:55 compute-2 sshd-session[228534]: Connection closed by invalid user mina 92.118.39.95 port 60294 [preauth]
Jan 22 13:57:55 compute-2 ceph-mon[77081]: pgmap v885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:57:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:57:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:57:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:57:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:57:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:57:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:55.666+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:55.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:56 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:56.709+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:57:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:57:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:57:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:57.702+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:57 compute-2 ceph-mon[77081]: pgmap v886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:57:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:57:58 compute-2 podman[228568]: 2026-01-22 13:57:58.045725596 +0000 UTC m=+0.093205475 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:57:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:58.659+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:57:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:57:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:57:59.684+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:57:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:59 compute-2 ceph-mon[77081]: pgmap v887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:57:59 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:57:59 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1269 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:57:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:57:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:57:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:57:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:00.682+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:00 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:01.650+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:01 compute-2 ceph-mon[77081]: pgmap v888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:01 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:01 compute-2 sudo[228589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:01 compute-2 sudo[228589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:01 compute-2 sudo[228589]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:58:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:01.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:58:01 compute-2 sudo[228614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:01 compute-2 sudo[228614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:01 compute-2 sudo[228614]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:02.653+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:02 compute-2 sudo[228640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:02 compute-2 sudo[228640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:02 compute-2 sudo[228640]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:02 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:58:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:58:02 compute-2 sudo[228665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:58:02 compute-2 sudo[228665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:02 compute-2 sudo[228665]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:03.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:03.678+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:03 compute-2 ceph-mon[77081]: pgmap v889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:03.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:04.629+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:04 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1274 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:04 compute-2 ceph-mon[77081]: pgmap v890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:05.599+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:05.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:06.628+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:06 compute-2 ceph-mon[77081]: pgmap v891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:07.587+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:07.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:08.618+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:08 compute-2 ceph-mon[77081]: pgmap v892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:08 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1279 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:09.650+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:09.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:10.650+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:11 compute-2 ceph-mon[77081]: pgmap v893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:11.668+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:11.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:12.655+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:13 compute-2 ceph-mon[77081]: pgmap v894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:13.654+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:13.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:14 compute-2 podman[228695]: 2026-01-22 13:58:14.072440974 +0000 UTC m=+0.115181755 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 22 13:58:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:14 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1284 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:14.617+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:15 compute-2 ceph-mon[77081]: pgmap v895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:15.598+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:15.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:16 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:16.591+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:17.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:17 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:17 compute-2 ceph-mon[77081]: pgmap v896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:17.573+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:17.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1890730645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:58:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1890730645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:58:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:18.548+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:19.567+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:19 compute-2 ceph-mon[77081]: pgmap v897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:19 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1289 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:19.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:20.521+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:20 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:58:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:21.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:58:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:21.548+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:21 compute-2 ceph-mon[77081]: pgmap v898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:21.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:22 compute-2 sudo[228725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:22 compute-2 sudo[228725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:22 compute-2 sudo[228725]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:22 compute-2 sudo[228750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:22 compute-2 sudo[228750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:22 compute-2 sudo[228750]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:22.542+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:22 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:22 compute-2 ceph-mon[77081]: pgmap v899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:58:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:23.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:58:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:23.496+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:23 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:23.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:24.498+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:24 compute-2 nova_compute[226433]: 2026-01-22 13:58:24.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:24 compute-2 nova_compute[226433]: 2026-01-22 13:58:24.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:24 compute-2 nova_compute[226433]: 2026-01-22 13:58:24.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:24 compute-2 nova_compute[226433]: 2026-01-22 13:58:24.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:58:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:24 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1294 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:24 compute-2 ceph-mon[77081]: pgmap v900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:25.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:25 compute-2 nova_compute[226433]: 2026-01-22 13:58:25.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:25 compute-2 nova_compute[226433]: 2026-01-22 13:58:25.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:25.536+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:25 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:58:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:25.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.541 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.542 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.543 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:26.565+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.575 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.576 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.576 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.576 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:58:26 compute-2 nova_compute[226433]: 2026-01-22 13:58:26.577 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:58:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:26 compute-2 ceph-mon[77081]: pgmap v901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:26 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2345469985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:58:27 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3226939497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.048 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:58:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:27.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.264 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.266 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5299MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.266 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.267 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.352 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.352 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.385 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:58:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:27.580+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:58:27 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/711554436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.870 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.878 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:58:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:27.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.916 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.919 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:58:27 compute-2 nova_compute[226433]: 2026-01-22 13:58:27.919 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:58:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3226939497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1562183962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/711554436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:28.595+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:28 compute-2 nova_compute[226433]: 2026-01-22 13:58:28.915 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:58:29 compute-2 podman[228823]: 2026-01-22 13:58:29.016101825 +0000 UTC m=+0.066837018 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 22 13:58:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:29.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:29 compute-2 ceph-mon[77081]: pgmap v902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:29 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1299 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2133378307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:29.573+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:29.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3382157991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:58:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:30.603+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:31.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:31.560+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:31 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:31 compute-2 ceph-mon[77081]: pgmap v903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:58:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:58:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:32.564+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:32 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:33.597+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:33 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:33 compute-2 ceph-mon[77081]: pgmap v904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:58:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:33.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:58:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:34.570+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:34 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:34 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:34 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1304 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:35.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:35.603+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:35 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:35 compute-2 ceph-mon[77081]: pgmap v905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:36.566+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:36 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:36 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:37.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:37.581+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:37 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:37 compute-2 ceph-mon[77081]: pgmap v906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:37.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:38.577+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:38 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:39 compute-2 sshd-session[228847]: Invalid user solv from 45.148.10.240 port 47020
Jan 22 13:58:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:39.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:39 compute-2 sshd-session[228847]: Connection closed by invalid user solv 45.148.10.240 port 47020 [preauth]
Jan 22 13:58:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:39.614+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:39 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:39 compute-2 ceph-mon[77081]: pgmap v907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:39 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1309 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:40.588+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:40 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:40 compute-2 ceph-mon[77081]: pgmap v908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:41.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:41.562+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:41 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 13:58:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:41.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 13:58:42 compute-2 sudo[228850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:42 compute-2 sudo[228850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:42 compute-2 sudo[228850]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:42 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:42 compute-2 sudo[228875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:58:42 compute-2 sudo[228875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:58:42 compute-2 sudo[228875]: pam_unix(sudo:session): session closed for user root
Jan 22 13:58:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:42.598+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:42 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:43 compute-2 ceph-mon[77081]: pgmap v909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:43.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:43.598+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:43 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:44 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:44 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1314 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:44.602+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:44 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:45 compute-2 podman[228902]: 2026-01-22 13:58:45.088483072 +0000 UTC m=+0.133912030 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 22 13:58:45 compute-2 ceph-mon[77081]: pgmap v910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:58:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:45.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:58:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:45.621+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:45 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:46.574+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:46 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:58:47.165 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:58:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:58:47.166 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:58:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:58:47.166 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:58:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:47.607+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:47 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:47.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:48 compute-2 ceph-mon[77081]: pgmap v911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:48.620+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:48 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:49 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:49 compute-2 ceph-mon[77081]: pgmap v912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:49 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1319 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:49.609+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:49 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:49.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:50 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:50 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:50.579+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:50 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:51 compute-2 ceph-mon[77081]: pgmap v913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:51.603+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:51 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:58:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:51.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
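[annotation] The beast lines from radosgw are per-request access records: client IP, timestamp, request line, HTTP status, byte count, and server-side latency. The anonymous HEAD / probes arriving every ~2 s from 192.168.122.100 and 192.168.122.102 look like external health checks, and their flat 0-1 ms latencies suggest the gateway itself is responsive while the vms-pool ops above are stuck. A hedged parsing sketch in the same vein (path and pattern are assumptions about this extract):

import re
from statistics import mean

# Matches: beast: 0x...: 1.2.3.4 - anonymous [ts] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
BEAST_RE = re.compile(
    r'beast: \S+: (\S+) - \S+ \[([^\]]+)\] "([^"]+)" (\d{3}) \d+ .* latency=([0-9.]+)s'
)

def beast_stats(path: str) -> None:
    by_client = {}   # client IP -> list of latencies in seconds
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = BEAST_RE.search(line)
            if m:
                client, _ts, _request, _status, lat = m.groups()
                by_client.setdefault(client, []).append(float(lat))
    for client, lats in sorted(by_client.items()):
        print("%s: %d requests, mean latency %.6fs" % (client, len(lats), mean(lats)))

Calling beast_stats("compute-2-journal.log") on this window should report the two probing clients with sub-millisecond means.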
Jan 22 13:58:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:52.651+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:52 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:53 compute-2 ceph-mon[77081]: pgmap v914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:53.656+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:53 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:58:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:53.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:58:54 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:54 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1324 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:54.631+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:54 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:58:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:58:55 compute-2 ceph-mon[77081]: pgmap v915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:55.655+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:55 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:58:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:55.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:58:56 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:56.704+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:56 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:57 compute-2 ceph-mon[77081]: pgmap v916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:58:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:57.680+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:57 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:57.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:58 compute-2 sshd-session[228935]: Connection closed by authenticating user root 45.148.10.121 port 44462 [preauth]
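[annotation] Unrelated to the Ceph noise, this sshd line records a root login attempt from a public address (45.148.10.121) dropped at the preauth stage. A small sketch, under the same saved-extract assumption, that tallies such closures per user and source so repeated probes stand out:

import re
from collections import Counter

# Matches: sshd-session[NNN]: Connection closed by authenticating user root 45.148.10.121 port 44462 [preauth]
PREAUTH_RE = re.compile(
    r"sshd[^:]*: Connection closed by authenticating user (\S+) (\S+) port \d+ \[preauth\]"
)

def preauth_sources(path: str) -> Counter:
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PREAUTH_RE.search(line)
            if m:
                hits[m.groups()] += 1   # key: (user, source IP)
    return hits

Sorting the returned Counter with .most_common() surfaces the loudest sources first.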
Jan 22 13:58:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:58.638+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:58 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.892593) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090338892671, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1385, "num_deletes": 256, "total_data_size": 2437880, "memory_usage": 2475584, "flush_reason": "Manual Compaction"}
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090338910703, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1600863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23637, "largest_seqno": 25017, "table_properties": {"data_size": 1595448, "index_size": 2619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13862, "raw_average_key_size": 20, "raw_value_size": 1583357, "raw_average_value_size": 2291, "num_data_blocks": 116, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090252, "oldest_key_time": 1769090252, "file_creation_time": 1769090338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 18166 microseconds, and 7738 cpu microseconds.
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.910768) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1600863 bytes OK
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.910791) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.913030) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.913053) EVENT_LOG_v1 {"time_micros": 1769090338913047, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.913076) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2431181, prev total WAL file size 2431181, number of live WAL files 2.
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.914197) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1563KB)], [45(9612KB)]
Jan 22 13:58:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090338914229, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 11443832, "oldest_snapshot_seqno": -1}
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 6102 keys, 11294089 bytes, temperature: kUnknown
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090339006558, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 11294089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11252603, "index_size": 25120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 158858, "raw_average_key_size": 26, "raw_value_size": 11140698, "raw_average_value_size": 1825, "num_data_blocks": 1009, "num_entries": 6102, "num_filter_entries": 6102, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090338, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.006849) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 11294089 bytes
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.008473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.8 rd, 122.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(14.2) write-amplify(7.1) OK, records in: 6627, records dropped: 525 output_compression: NoCompression
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.008494) EVENT_LOG_v1 {"time_micros": 1769090339008484, "job": 26, "event": "compaction_finished", "compaction_time_micros": 92423, "compaction_time_cpu_micros": 44324, "output_level": 6, "num_output_files": 1, "total_output_size": 11294089, "num_input_records": 6627, "num_output_records": 6102, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090339009122, "job": 26, "event": "table_file_deletion", "file_number": 47}
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090339011360, "job": 26, "event": "table_file_deletion", "file_number": 45}
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:58.914157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.011481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.011487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.011490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.011493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:58:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:58:59.011496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
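[annotation] The burst above is the mon compacting its store.db: flush job 25 writes L0 table #47 (about 1.6 MB in 18 ms), manual compaction job 26 merges it with L6 table #45 into table #48 (about 11.3 MB, write-amplify 7.1), and both input files are deleted. The EVENT_LOG_v1 entries embed machine-readable JSON after the marker, so they are the easiest part to mine; a sketch under the same saved-extract assumption:

import json
import re

# EVENT_LOG_v1 lines carry a JSON object after the marker.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def rocksdb_events(path: str):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

def summarize(path: str) -> None:
    for ev in rocksdb_events(path):
        if ev.get("event") == "flush_finished":
            print("flush job %d finished, lsm_state %s" % (ev["job"], ev["lsm_state"]))
        elif ev.get("event") == "compaction_finished":
            print("compaction job %d: %.1f MB out in %.3f s"
                  % (ev["job"], ev["total_output_size"] / 1e6,
                     ev["compaction_time_micros"] / 1e6))

On this extract it should print flush job 25 with lsm_state [1, 0, 0, 0, 0, 0, 1] and compaction job 26 at roughly 11.3 MB in 0.092 s, matching the summary lines above.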
Jan 22 13:58:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:58:59.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:58:59.623+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:59 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:58:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:59 compute-2 ceph-mon[77081]: pgmap v917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:58:59 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:58:59 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1329 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:58:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:58:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:58:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:58:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:58:59 compute-2 podman[228938]: 2026-01-22 13:58:59.991833762 +0000 UTC m=+0.054065539 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 13:59:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:00.619+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:00 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:00 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:01.570+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:01 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:01 compute-2 ceph-mon[77081]: pgmap v918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:01 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:02 compute-2 sudo[228958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:02 compute-2 sudo[228958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:02 compute-2 sudo[228958]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:02 compute-2 sudo[228983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:02 compute-2 sudo[228983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:02 compute-2 sudo[228983]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:02.620+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:02 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:02 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:03 compute-2 sudo[229009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:03 compute-2 sudo[229009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-2 sudo[229009]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:03 compute-2 sudo[229034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 13:59:03 compute-2 sudo[229034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-2 sudo[229034]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:03 compute-2 sudo[229059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:03 compute-2 sudo[229059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-2 sudo[229059]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:03 compute-2 sudo[229084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 13:59:03 compute-2 sudo[229084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:03 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:03.660+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:03 compute-2 sudo[229084]: pam_unix(sudo:session): session closed for user root
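[annotation] The sudo bursts from ceph-admin are the cephadm orchestrator's periodic host refresh: a few /bin/true reachability probes, a "which python3" lookup, then the copied cephadm binary run as root with gather-facts under an 895 s timeout. A last sketch, same assumptions, that tallies which executables were run through sudo and by whom:

import re
from collections import Counter

# Matches: sudo[NNN]: ceph-admin : PWD=... ; USER=root ; COMMAND=/bin/true
SUDO_RE = re.compile(r"sudo\[\d+\]: (\S+) : .*COMMAND=(.+)$")

def sudo_commands(path: str) -> Counter:
    cmds = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = SUDO_RE.search(line)
            if m:
                user, command = m.groups()
                cmds[(user, command.split()[0])] += 1   # bucket by executable
    return cmds

In this window everything should bucket under ceph-admin with /bin/true, /bin/which, /bin/ls, and /bin/python3; anything outside that set would merit a closer look.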
Jan 22 13:59:03 compute-2 ceph-mon[77081]: pgmap v919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:03 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:03.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:04 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:04.708+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:05 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1334 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:59:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 13:59:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:59:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 13:59:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 13:59:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 13:59:05 compute-2 ceph-mon[77081]: pgmap v920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:05 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:05.731+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:05.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:06 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:06.736+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:07 compute-2 ceph-mon[77081]: pgmap v921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:07.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:07 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:07.782+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 13:59:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:07.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 13:59:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:08 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:08.760+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:09 compute-2 ceph-mon[77081]: pgmap v922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:09 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:09 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:09.716+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:09.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:10 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:10.750+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:10 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:11 compute-2 sudo[229143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:11 compute-2 sudo[229143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:11 compute-2 sudo[229143]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:11 compute-2 sudo[229168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 13:59:11 compute-2 sudo[229168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:11 compute-2 sudo[229168]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:11 compute-2 ceph-mon[77081]: pgmap v923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:59:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 13:59:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:11.801+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:11 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:11.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:12.844+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:12 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:13 compute-2 ceph-mon[77081]: pgmap v924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:13.842+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:13 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:13.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:14 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:14.806+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:14 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:15.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:15 compute-2 ceph-mon[77081]: pgmap v925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:15.788+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:15 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:15.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:16 compute-2 podman[229195]: 2026-01-22 13:59:16.02498694 +0000 UTC m=+0.086046086 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 13:59:16 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:16.739+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:16 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:17.746+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:17 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:17.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:18 compute-2 ceph-mon[77081]: pgmap v926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3490422437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 13:59:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3490422437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 13:59:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:18.790+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:18 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:19.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:19 compute-2 ceph-mon[77081]: pgmap v927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:19 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:19 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:19.817+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:19 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:19.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:20 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:20.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:20 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:21.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:21.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:21 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:21 compute-2 ceph-mon[77081]: pgmap v928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:21.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:22 compute-2 sudo[229224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:22 compute-2 sudo[229224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:22 compute-2 sudo[229224]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:22 compute-2 sudo[229250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:22 compute-2 sudo[229250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:22 compute-2 sudo[229250]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:22.909+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:22 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:22 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:22 compute-2 ceph-mon[77081]: pgmap v929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:23.880+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:23 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.906794) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363906864, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 595, "num_deletes": 250, "total_data_size": 744313, "memory_usage": 756208, "flush_reason": "Manual Compaction"}
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363911428, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 395730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25022, "largest_seqno": 25612, "table_properties": {"data_size": 392899, "index_size": 739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7937, "raw_average_key_size": 20, "raw_value_size": 386850, "raw_average_value_size": 999, "num_data_blocks": 31, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090339, "oldest_key_time": 1769090339, "file_creation_time": 1769090363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 4667 microseconds, and 2003 cpu microseconds.
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.911471) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 395730 bytes OK
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.911490) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.913156) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.913178) EVENT_LOG_v1 {"time_micros": 1769090363913172, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.913197) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 740882, prev total WAL file size 740882, number of live WAL files 2.
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.913679) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(386KB)], [48(10MB)]
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363913711, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11689819, "oldest_snapshot_seqno": -1}
Jan 22 13:59:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:23.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5982 keys, 7893106 bytes, temperature: kUnknown
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363975053, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 7893106, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7856861, "index_size": 20199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 156981, "raw_average_key_size": 26, "raw_value_size": 7751398, "raw_average_value_size": 1295, "num_data_blocks": 793, "num_entries": 5982, "num_filter_entries": 5982, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.975477) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7893106 bytes
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.977857) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.1 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(49.5) write-amplify(19.9) OK, records in: 6489, records dropped: 507 output_compression: NoCompression
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.977888) EVENT_LOG_v1 {"time_micros": 1769090363977873, "job": 28, "event": "compaction_finished", "compaction_time_micros": 61503, "compaction_time_cpu_micros": 21232, "output_level": 6, "num_output_files": 1, "total_output_size": 7893106, "num_input_records": 6489, "num_output_records": 5982, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363978443, "job": 28, "event": "table_file_deletion", "file_number": 50}
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090363982718, "job": 28, "event": "table_file_deletion", "file_number": 48}
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.913612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.982821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.982827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.982829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.982832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:59:23.982835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 13:59:23 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:23 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:24 compute-2 nova_compute[226433]: 2026-01-22 13:59:24.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:24.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:24 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:24 compute-2 ceph-mon[77081]: pgmap v930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:25 compute-2 nova_compute[226433]: 2026-01-22 13:59:25.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:25 compute-2 nova_compute[226433]: 2026-01-22 13:59:25.538 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:25 compute-2 nova_compute[226433]: 2026-01-22 13:59:25.539 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:25 compute-2 nova_compute[226433]: 2026-01-22 13:59:25.539 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 13:59:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:25.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:25.822+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:25 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=404 latency=0.002000050s ======
Jan 22 13:59:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:25.854 +0000] "GET /healthcheck HTTP/1.1" 404 240 - "python-urllib3/1.26.5" - latency=0.002000050s
Jan 22 13:59:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:25.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:26 compute-2 nova_compute[226433]: 2026-01-22 13:59:26.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:26.807+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:26 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:27 compute-2 ceph-mon[77081]: pgmap v931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:27 compute-2 nova_compute[226433]: 2026-01-22 13:59:27.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:27 compute-2 nova_compute[226433]: 2026-01-22 13:59:27.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:27.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:27.788+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:27 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:27.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:28 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.534 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.534 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.565 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.565 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.565 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.565 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.566 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:59:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:28.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:28 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:59:28 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3122242723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:28 compute-2 nova_compute[226433]: 2026-01-22 13:59:28.999 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:59:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:29 compute-2 ceph-mon[77081]: pgmap v932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/4186869810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1265044308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3122242723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.175 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.176 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5296MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.176 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.177 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.307 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.308 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.326 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 13:59:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:29.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 13:59:29 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4290223506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.773 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.780 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.799 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.800 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 13:59:29 compute-2 nova_compute[226433]: 2026-01-22 13:59:29.801 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:59:29 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:29.839+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:29.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:30 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/4002500831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4290223506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2587536646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 13:59:30 compute-2 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:30.878+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:30 compute-2 podman[229323]: 2026-01-22 13:59:30.989757463 +0000 UTC m=+0.050789737 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 13:59:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Jan 22 13:59:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:31 compute-2 ceph-mon[77081]: pgmap v933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:31.891+0000 7f47f8ed4640 -1 osd.2 140 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:31 compute-2 ceph-osd[79779]: osd.2 140 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:31.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:32 compute-2 ceph-mon[77081]: osdmap e140: 3 total, 3 up, 3 in
Jan 22 13:59:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Jan 22 13:59:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:32.877+0000 7f47f8ed4640 -1 osd.2 141 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:32 compute-2 ceph-osd[79779]: osd.2 141 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:33 compute-2 ceph-mon[77081]: pgmap v935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:33 compute-2 ceph-mon[77081]: osdmap e141: 3 total, 3 up, 3 in
Jan 22 13:59:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e142 e142: 3 total, 3 up, 3 in
Jan 22 13:59:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:33.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:33 compute-2 ceph-osd[79779]: osd.2 142 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:33.893+0000 7f47f8ed4640 -1 osd.2 142 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:33.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:34 compute-2 ceph-mon[77081]: osdmap e142: 3 total, 3 up, 3 in
Jan 22 13:59:34 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:34 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:34.913+0000 7f47f8ed4640 -1 osd.2 142 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:34 compute-2 ceph-osd[79779]: osd.2 142 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:35 compute-2 ceph-mon[77081]: pgmap v938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 1.3 MiB/s wr, 0 op/s
Jan 22 13:59:35 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:35.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:35.955+0000 7f47f8ed4640 -1 osd.2 142 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:35 compute-2 ceph-osd[79779]: osd.2 142 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:35.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:36 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e143 e143: 3 total, 3 up, 3 in
Jan 22 13:59:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:36.940+0000 7f47f8ed4640 -1 osd.2 143 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:36 compute-2 ceph-osd[79779]: osd.2 143 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:37 compute-2 ceph-mon[77081]: pgmap v939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 174 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Jan 22 13:59:37 compute-2 ceph-mon[77081]: osdmap e143: 3 total, 3 up, 3 in
Jan 22 13:59:37 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:37.904+0000 7f47f8ed4640 -1 osd.2 143 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:37 compute-2 ceph-osd[79779]: osd.2 143 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:37.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:38 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 e144: 3 total, 3 up, 3 in
Jan 22 13:59:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:38.954+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:38 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:39.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:39 compute-2 ceph-mon[77081]: pgmap v941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 21 MiB data, 174 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 32 op/s
Jan 22 13:59:39 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:39 compute-2 ceph-mon[77081]: osdmap e144: 3 total, 3 up, 3 in
Jan 22 13:59:39 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:39.905+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:39 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:39.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:40 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:40.877+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:40 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:41.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:41.829+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:41 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:41 compute-2 ceph-mon[77081]: pgmap v943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 25 MiB data, 178 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 47 op/s
Jan 22 13:59:41 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:42 compute-2 sudo[229349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:42 compute-2 sudo[229349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:42 compute-2 sudo[229349]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:42 compute-2 sudo[229374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 13:59:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:42.860+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:42 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:42 compute-2 sudo[229374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 13:59:42 compute-2 sudo[229374]: pam_unix(sudo:session): session closed for user root
Jan 22 13:59:43 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:43 compute-2 ceph-mon[77081]: pgmap v944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 22 13:59:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:43.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:43.897+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:43 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:44 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:44 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:44.941+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:44 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:45 compute-2 ceph-mon[77081]: pgmap v945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Jan 22 13:59:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:45.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:45.927+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:45 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 13:59:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 13:59:46 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:46.931+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:46 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:47 compute-2 podman[229401]: 2026-01-22 13:59:47.027783795 +0000 UTC m=+0.087963074 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 13:59:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:59:47.167 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 13:59:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:59:47.167 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 13:59:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 13:59:47.167 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 13:59:47 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:47 compute-2 ceph-mon[77081]: pgmap v946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 18 op/s
Jan 22 13:59:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:47.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:47.928+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:47 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:47.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:48 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:48.902+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:48 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:49 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:49 compute-2 ceph-mon[77081]: pgmap v947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s
Jan 22 13:59:49 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:49.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:49.915+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:49 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:49.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:50 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:50.944+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:50 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:51 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:51 compute-2 ceph-mon[77081]: pgmap v948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 1.4 MiB/s wr, 5 op/s
Jan 22 13:59:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:51.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:51.987+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:51 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:51.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:52 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:52.983+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:52 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:53 compute-2 ceph-mon[77081]: pgmap v949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 1.3 MiB/s wr, 4 op/s
Jan 22 13:59:53 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:53.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:53.993+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:53 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:53.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:54.971+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:54 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:55.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:55 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:55 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:55.994+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:55 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:59:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:55.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:59:56 compute-2 ceph-mon[77081]: pgmap v950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:56 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:56 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:57.010+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:57 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 13:59:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:57.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 13:59:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 13:59:57 compute-2 ceph-mon[77081]: pgmap v951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:57 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:57.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 13:59:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:58.047+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:58 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:58 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:58 compute-2 ceph-mon[77081]: pgmap v952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 13:59:58 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 13:59:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:59:59.030+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:59 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 13:59:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 13:59:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 13:59:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 13:59:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:59:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:59:59.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:00 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:00.071+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:00 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:00 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:00:00.593 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:00:00 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:00:00.594 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:00:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:01.060+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:01 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:01 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 4 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops
Jan 22 14:00:01 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 4 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops
Jan 22 14:00:01 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:01 compute-2 ceph-mon[77081]: pgmap v953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:01.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:02.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:02 compute-2 podman[229434]: 2026-01-22 14:00:02.023595647 +0000 UTC m=+0.075045056 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:00:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:02.069+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:02 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:02 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:02 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:02 compute-2 sudo[229454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:02 compute-2 sudo[229454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:02 compute-2 sudo[229454]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:03 compute-2 sudo[229479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:03 compute-2 sudo[229479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:03 compute-2 sudo[229479]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:03.093+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:03 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:03.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:04 compute-2 ceph-mon[77081]: pgmap v954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:04.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:04.120+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:04 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:05 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:05 compute-2 ceph-mon[77081]: pgmap v955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:05.109+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:05 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:05.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:06.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:06.067+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:06 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:06 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2336252334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:06 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:00:06.595 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:00:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:07.027+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:07 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:07 compute-2 ceph-mon[77081]: pgmap v956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:07.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:08.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:08.029+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:08 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:08 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:09.007+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:09 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:09.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:09 compute-2 ceph-mon[77081]: pgmap v957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:09 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:10.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:10.029+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:10 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:11.010+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:11 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:11 compute-2 sudo[229508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:11 compute-2 sudo[229508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-2 sudo[229508]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:11 compute-2 sudo[229533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:00:11 compute-2 sudo[229533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-2 sudo[229533]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:11 compute-2 sudo[229558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:11 compute-2 sudo[229558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-2 sudo[229558]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:11 compute-2 sudo[229583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:00:11 compute-2 sudo[229583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:11.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:11 compute-2 sudo[229583]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:11.971+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:11 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:12.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:12 compute-2 sshd-session[229640]: Invalid user ethereum from 92.118.39.95 port 39234
Jan 22 14:00:12 compute-2 sshd-session[229640]: Connection closed by invalid user ethereum 92.118.39.95 port 39234 [preauth]
Jan 22 14:00:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:12.928+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:12 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:13 compute-2 ceph-mon[77081]: pgmap v958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:00:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:00:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:00:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:00:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:00:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:00:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:00:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:13.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:13.973+0000 7f47f8ed4640 -1 osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:13 compute-2 ceph-osd[79779]: osd.2 144 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e145 e145: 3 total, 3 up, 3 in
Jan 22 14:00:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:14.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:14 compute-2 ceph-mon[77081]: pgmap v959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 5 op/s
Jan 22 14:00:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:14 compute-2 ceph-mon[77081]: osdmap e145: 3 total, 3 up, 3 in
Jan 22 14:00:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:14.978+0000 7f47f8ed4640 -1 osd.2 145 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:14 compute-2 ceph-osd[79779]: osd.2 145 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e146 e146: 3 total, 3 up, 3 in
Jan 22 14:00:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:15.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:15 compute-2 ceph-mon[77081]: pgmap v961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 825 KiB/s rd, 7 op/s
Jan 22 14:00:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:16.012+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:16 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:16.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:16 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:00:16 compute-2 ceph-mon[77081]: osdmap e146: 3 total, 3 up, 3 in
Jan 22 14:00:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:16.985+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:16 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:17.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:17.983+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:17 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:18.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:18 compute-2 podman[229645]: 2026-01-22 14:00:18.032097901 +0000 UTC m=+0.089410661 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 22 14:00:18 compute-2 ceph-mon[77081]: pgmap v963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:00:18 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:18 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:00:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/866052997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:00:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:00:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/866052997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:00:19 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:19.020+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/866052997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:00:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/866052997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:00:19 compute-2 ceph-mon[77081]: pgmap v964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:00:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:19.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:20.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:20.026+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:20 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:21.030+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:21 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.410252) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421410355, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1034, "num_deletes": 251, "total_data_size": 1752085, "memory_usage": 1776000, "flush_reason": "Manual Compaction"}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421422932, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1150713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25617, "largest_seqno": 26646, "table_properties": {"data_size": 1146145, "index_size": 2028, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11771, "raw_average_key_size": 20, "raw_value_size": 1136234, "raw_average_value_size": 2000, "num_data_blocks": 89, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090363, "oldest_key_time": 1769090363, "file_creation_time": 1769090421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 12741 microseconds, and 6715 cpu microseconds.
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.422991) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1150713 bytes OK
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.423019) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.426445) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.426472) EVENT_LOG_v1 {"time_micros": 1769090421426464, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.426494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1746768, prev total WAL file size 1746768, number of live WAL files 2.
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.427649) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1123KB)], [51(7708KB)]
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421427697, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 9043819, "oldest_snapshot_seqno": -1}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 6033 keys, 7301113 bytes, temperature: kUnknown
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421502616, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 7301113, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7264988, "index_size": 19951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 159290, "raw_average_key_size": 26, "raw_value_size": 7158856, "raw_average_value_size": 1186, "num_data_blocks": 778, "num_entries": 6033, "num_filter_entries": 6033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.502869) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7301113 bytes
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.504553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.6 rd, 97.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.5 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(14.2) write-amplify(6.3) OK, records in: 6550, records dropped: 517 output_compression: NoCompression
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.504573) EVENT_LOG_v1 {"time_micros": 1769090421504564, "job": 30, "event": "compaction_finished", "compaction_time_micros": 75004, "compaction_time_cpu_micros": 35022, "output_level": 6, "num_output_files": 1, "total_output_size": 7301113, "num_input_records": 6550, "num_output_records": 6033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421504910, "job": 30, "event": "table_file_deletion", "file_number": 53}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090421506576, "job": 30, "event": "table_file_deletion", "file_number": 51}
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.427577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.506647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.506652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.506653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.506655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:00:21.506656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:00:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:21.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:22.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:22.072+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:22 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:22 compute-2 ceph-mon[77081]: pgmap v965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 511 B/s wr, 3 op/s
Jan 22 14:00:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:23 compute-2 sudo[229675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:23 compute-2 sudo[229675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:23 compute-2 sudo[229675]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:23.119+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:23 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:23 compute-2 sudo[229700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:23 compute-2 sudo[229700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:23 compute-2 sudo[229700]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:23 compute-2 ceph-mon[77081]: pgmap v966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 618 B/s wr, 2 op/s
Jan 22 14:00:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:23.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:24.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:24.090+0000 7f47f8ed4640 -1 osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:24 compute-2 ceph-osd[79779]: osd.2 146 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e147 e147: 3 total, 3 up, 3 in
Jan 22 14:00:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:24 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:24 compute-2 nova_compute[226433]: 2026-01-22 14:00:24.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:24 compute-2 nova_compute[226433]: 2026-01-22 14:00:24.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:24 compute-2 nova_compute[226433]: 2026-01-22 14:00:24.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:00:24 compute-2 nova_compute[226433]: 2026-01-22 14:00:24.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:00:24 compute-2 nova_compute[226433]: 2026-01-22 14:00:24.545 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:24 compute-2 nova_compute[226433]: 2026-01-22 14:00:24.545 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:00:24 compute-2 nova_compute[226433]: 2026-01-22 14:00:24.560 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:25.070+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:25 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:25 compute-2 sudo[229726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:25 compute-2 sudo[229726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:25 compute-2 sudo[229726]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:25 compute-2 sudo[229751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:00:25 compute-2 sudo[229751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:25 compute-2 sudo[229751]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:25.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:25 compute-2 ceph-mon[77081]: pgmap v967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 511 B/s wr, 1 op/s
Jan 22 14:00:25 compute-2 ceph-mon[77081]: osdmap e147: 3 total, 3 up, 3 in
Jan 22 14:00:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:00:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:00:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:26.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:26.110+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:26 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:26 compute-2 nova_compute[226433]: 2026-01-22 14:00:26.567 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:26 compute-2 nova_compute[226433]: 2026-01-22 14:00:26.568 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:27.103+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:27 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:27 compute-2 nova_compute[226433]: 2026-01-22 14:00:27.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:27 compute-2 nova_compute[226433]: 2026-01-22 14:00:27.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:00:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:27.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:28.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:28.064+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:28 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:28 compute-2 ceph-mon[77081]: pgmap v969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:28 compute-2 nova_compute[226433]: 2026-01-22 14:00:28.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:28 compute-2 nova_compute[226433]: 2026-01-22 14:00:28.563 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:28 compute-2 nova_compute[226433]: 2026-01-22 14:00:28.564 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:28 compute-2 nova_compute[226433]: 2026-01-22 14:00:28.564 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:28 compute-2 nova_compute[226433]: 2026-01-22 14:00:28.564 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:00:28 compute-2 nova_compute[226433]: 2026-01-22 14:00:28.564 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:00:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:00:28 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3842263417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:28 compute-2 nova_compute[226433]: 2026-01-22 14:00:28.967 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:00:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:29.044+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:29 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.152 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.154 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5287MB free_disk=20.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.154 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.154 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.394 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.395 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.515 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.619 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.620 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
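[annotation] The inventory payload above is what the scheduler consumes; placement treats usable capacity as (total - reserved) * allocation_ratio, so the raw host numbers translate as follows (plain arithmetic on the values logged above):

```python
# capacity = (total - reserved) * allocation_ratio, per resource class
vcpu = (8    - 0)   * 4.0   # 32.0 schedulable vCPUs on 8 physical cores
ram  = (7679 - 512) * 1.0   # 7167 MB schedulable memory, no overcommit
disk = (20   - 0)   * 0.9   # 18.0 GB schedulable disk, undercommitted
print(vcpu, ram, disk)
```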
Jan 22 14:00:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.636 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:00:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:29.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
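[annotation] The beast access line above has a stable shape: request pointer, client address, user, timestamp, request line, status, byte count, latency. The anonymous "HEAD /" probes arriving every couple of seconds from 192.168.122.100/.102 look like load-balancer health checks rather than real S3 traffic. A parsing sketch, assuming this exact format:

```python
import re

BEAST = re.compile(
    r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

# Sample taken verbatim from the entry above.
line = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
        '[22/Jan/2026:14:00:29.636 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST.search(line)
print(m.group("addr"), m.group("req"), m.group("status"),
      m.group("latency"))
```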
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.660 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:00:29 compute-2 nova_compute[226433]: 2026-01-22 14:00:29.677 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:00:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:29 compute-2 ceph-mon[77081]: pgmap v970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3842263417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:29 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:00:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/726780379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:30.089+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:30 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
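[annotation] The osd_op description above packs several identifiers: client.14140.0:10 is client gid 14140, request tid 10; 2.12 is placement group 12 in pool 2; the object is rbd_mirror_snapshot_schedule and the blocked operation is an omap read (omap-get-vals). The OSD's admin socket can show what such ops are waiting on; a hedged sketch (field names differ slightly across Ceph releases, and under cephadm the command must run where osd.2's admin socket is reachable, e.g. via "cephadm shell"):

```python
import json
import subprocess

# List in-flight ops on osd.2 via its admin socket.
out = subprocess.check_output(
    ["ceph", "daemon", "osd.2", "dump_ops_in_flight"])
for op in json.loads(out).get("ops", []):
    print(op.get("age"), op.get("description"))
```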
Jan 22 14:00:30 compute-2 nova_compute[226433]: 2026-01-22 14:00:30.103 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
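[annotation] The "ceph df --format=json" call above (started at 14:00:29.677, returned 0 in 0.427 s) is how the resource tracker's RBD backend sizes its disk inventory. A sketch of the same query; the pool name "vms" comes from the slow-request lines, and which fields nova actually reads is an assumption here:

```python
import json
import subprocess

# Same command the log shows, run directly and inspected for the
# 'vms' pool capacity.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
df = json.loads(out)
vms = next(p for p in df["pools"] if p["name"] == "vms")
# max_avail is the bytes the pool can still absorb, replication included.
print(vms["stats"]["max_avail"] / 2**30, "GiB available in 'vms'")
```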
Jan 22 14:00:30 compute-2 nova_compute[226433]: 2026-01-22 14:00:30.108 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:00:30 compute-2 nova_compute[226433]: 2026-01-22 14:00:30.150 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:00:30 compute-2 nova_compute[226433]: 2026-01-22 14:00:30.152 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:00:30 compute-2 nova_compute[226433]: 2026-01-22 14:00:30.152 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/726780379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1731822576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:31.090+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:31 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:31 compute-2 nova_compute[226433]: 2026-01-22 14:00:31.147 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:31 compute-2 nova_compute[226433]: 2026-01-22 14:00:31.148 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:31 compute-2 nova_compute[226433]: 2026-01-22 14:00:31.148 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:00:31 compute-2 nova_compute[226433]: 2026-01-22 14:00:31.148 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:00:31 compute-2 nova_compute[226433]: 2026-01-22 14:00:31.165 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:00:31 compute-2 nova_compute[226433]: 2026-01-22 14:00:31.165 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:31 compute-2 nova_compute[226433]: 2026-01-22 14:00:31.165 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:00:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:31.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:32.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:32.087+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:32 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:32 compute-2 ceph-mon[77081]: pgmap v971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 102 B/s rd, 102 B/s wr, 0 op/s
Jan 22 14:00:32 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1032470193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:32 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3469774285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:33 compute-2 podman[229824]: 2026-01-22 14:00:33.00773075 +0000 UTC m=+0.064139033 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
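[annotation] The podman event above is a healthcheck result: podman ran the configured test (/openstack/healthcheck) inside ovn_metadata_agent and recorded health_status=healthy with a zero failing streak. One way to read the same status on demand (hedged; older podman releases expose the field as .State.Healthcheck.Status instead of the Docker-compatible path used here):

```python
import subprocess

# Query the container's current health state from its inspect data.
status = subprocess.check_output(
    ["podman", "inspect", "--format",
     "{{.State.Health.Status}}", "ovn_metadata_agent"],
    text=True).strip()
print(status)  # expected: healthy / starting / unhealthy
```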
Jan 22 14:00:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:33.118+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:33 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3949338357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:33 compute-2 ceph-mon[77081]: pgmap v972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:33.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:34.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:34.075+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:34 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:34 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:34 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1423 sec, osd.2 has slow ops (SLOW_OPS)
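[annotation] Comparing the SLOW_OPS updates shows the blocked-for counter tracking wall clock exactly: 1418 s at 14:00:29, 1423 s at 14:00:34 (+5 s in +5 s), so the oldest op is making no progress at all. Projecting back dates its start to about 13:36:51:

```python
from datetime import datetime, timedelta

report = datetime(2026, 1, 22, 14, 0, 29)   # "blocked for 1418 sec"
print(report - timedelta(seconds=1418))     # 2026-01-22 13:36:51
```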
Jan 22 14:00:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:35.111+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:35 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:35.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:36 compute-2 ceph-mon[77081]: pgmap v973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:36.122+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:36 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:37.149+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:37 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:37.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:38 compute-2 ceph-mon[77081]: pgmap v974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:38.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:38.155+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:38 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:39.106+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:39 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:39 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:39 compute-2 ceph-mon[77081]: pgmap v975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:39.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:40.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:40.082+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:40 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:40 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:41.072+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:41 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:41 compute-2 ceph-mon[77081]: pgmap v976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:42.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:42 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:42.045+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:43 compute-2 ceph-osd[79779]: osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:43.092+0000 7f47f8ed4640 -1 osd.2 147 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:43 compute-2 sudo[229850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:43 compute-2 sudo[229850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:43 compute-2 sudo[229850]: pam_unix(sudo:session): session closed for user root
Jan 22 14:00:43 compute-2 sudo[229875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:00:43 compute-2 sudo[229875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:00:43 compute-2 sudo[229875]: pam_unix(sudo:session): session closed for user root
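[annotation] The paired sudo sessions running /bin/true as ceph-admin are a passwordless-sudo probe; cephadm's SSH layer performs this kind of check before managing a host (an inference from the command, not something these lines state). A minimal reproduction, with the hostname as an illustrative stand-in:

```python
import subprocess

# "sudo -n" fails instead of prompting, so the exit status answers
# "can ceph-admin sudo without a password?" on the target host.
ok = subprocess.run(
    ["ssh", "ceph-admin@compute-2", "sudo", "-n", "true"],
    capture_output=True).returncode == 0
print("passwordless sudo:", ok)
```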
Jan 22 14:00:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e148 e148: 3 total, 3 up, 3 in
Jan 22 14:00:43 compute-2 ceph-mon[77081]: pgmap v977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:43 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:44.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:44 compute-2 ceph-osd[79779]: osd.2 148 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:44.111+0000 7f47f8ed4640 -1 osd.2 148 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:45 compute-2 ceph-osd[79779]: osd.2 148 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:45.154+0000 7f47f8ed4640 -1 osd.2 148 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:45 compute-2 ceph-mon[77081]: osdmap e148: 3 total, 3 up, 3 in
Jan 22 14:00:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:45 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e149 e149: 3 total, 3 up, 3 in
Jan 22 14:00:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:45.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:46.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:46 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:46.121+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:46 compute-2 ceph-mon[77081]: pgmap v979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:00:46 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:00:46 compute-2 ceph-mon[77081]: osdmap e149: 3 total, 3 up, 3 in
Jan 22 14:00:46 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:47 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:47.072+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:00:47.168 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:00:47.168 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:00:47.168 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:47.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:47 compute-2 ceph-mon[77081]: pgmap v981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 383 B/s wr, 0 op/s
Jan 22 14:00:47 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:48 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:48.029+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:48.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:48 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:49 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:49.037+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:49 compute-2 podman[229903]: 2026-01-22 14:00:49.075959413 +0000 UTC m=+0.125395000 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 14:00:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:50 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:50.030+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:50.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:50 compute-2 ceph-mon[77081]: pgmap v982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 383 B/s wr, 0 op/s
Jan 22 14:00:50 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:50 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:50 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:50.990+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:51 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:51 compute-2 ceph-mon[77081]: pgmap v983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 639 B/s wr, 1 op/s
Jan 22 14:00:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:51.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:51 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:51.989+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:52.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:52 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:52 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:52 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:52.963+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:53.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:53 compute-2 ceph-mon[77081]: pgmap v984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 365 B/s rd, 731 B/s wr, 1 op/s
Jan 22 14:00:53 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:53 compute-2 ceph-osd[79779]: osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:53.978+0000 7f47f8ed4640 -1 osd.2 149 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:00:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:54.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:00:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 e150: 3 total, 3 up, 3 in
Jan 22 14:00:54 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:54 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 1444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:00:54 compute-2 ceph-mon[77081]: osdmap e150: 3 total, 3 up, 3 in
Jan 22 14:00:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:54.990+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:54 compute-2 sshd-session[229934]: Invalid user solv from 45.148.10.240 port 54242
Jan 22 14:00:55 compute-2 sshd-session[229934]: Connection closed by invalid user solv 45.148.10.240 port 54242 [preauth]
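[annotation] The two sshd lines above are an unauthenticated probe: a nonexistent user ("solv") from 45.148.10.240 that disconnects before authenticating. A small tally of such probes per source address from a saved journal dump (the file name is a placeholder; adapt to however the log is captured):

```python
import re
from collections import Counter

# Count "Invalid user" probes per source IP.
pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
hits = Counter()
with open("compute-2.log") as fh:
    for line in fh:
        m = pat.search(line)
        if m:
            hits[m.group(2)] += 1
print(hits.most_common(5))
```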
Jan 22 14:00:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:55.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:55 compute-2 ceph-mon[77081]: pgmap v985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 614 B/s wr, 1 op/s
Jan 22 14:00:55 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:55.995+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:56.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:56.966+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:57 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:57 compute-2 ceph-mon[77081]: pgmap v987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.293 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Acquiring lock "e0e74330-96df-479f-8baf-53fbd2ccba91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.294 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Lock "e0e74330-96df-479f-8baf-53fbd2ccba91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.457 226437 DEBUG nova.compute.manager [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.563 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.564 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.571 226437 DEBUG nova.virt.hardware [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.571 226437 INFO nova.compute.claims [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:00:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:57.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:57 compute-2 nova_compute[226433]: 2026-01-22 14:00:57.695 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:00:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:00:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:57.982+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:00:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:00:58.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:00:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:00:58 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/646427276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.106 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.112 226437 DEBUG nova.compute.provider_tree [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:00:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:00:58 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/473326240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.135 226437 DEBUG nova.scheduler.client.report [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:00:58 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:58 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/646427276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:58 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/473326240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.470 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.471 226437 DEBUG nova.compute.manager [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.594 226437 DEBUG nova.compute.manager [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.594 226437 DEBUG nova.network.neutron [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.837 226437 INFO nova.virt.libvirt.driver [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:00:58 compute-2 nova_compute[226433]: 2026-01-22 14:00:58.877 226437 DEBUG nova.compute.manager [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:00:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:58.999+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.032 226437 DEBUG nova.compute.manager [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.034 226437 DEBUG nova.virt.libvirt.driver [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.035 226437 INFO nova.virt.libvirt.driver [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Creating image(s)
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.075 226437 DEBUG nova.storage.rbd_utils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] rbd image e0e74330-96df-479f-8baf-53fbd2ccba91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.105 226437 DEBUG nova.storage.rbd_utils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] rbd image e0e74330-96df-479f-8baf-53fbd2ccba91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.140 226437 DEBUG nova.storage.rbd_utils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] rbd image e0e74330-96df-479f-8baf-53fbd2ccba91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.145 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.146 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:00:59 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:59 compute-2 ceph-mon[77081]: pgmap v988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 204 B/s rd, 307 B/s wr, 0 op/s
Jan 22 14:00:59 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.612 226437 WARNING oslo_policy.policy [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.612 226437 WARNING oslo_policy.policy [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.615 226437 DEBUG nova.policy [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0543a9d7720b47b580746e523aa51e97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '87e683d63c47432aa4cffe28b42e8de7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 14:00:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:00:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:00:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:00:59.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:00:59 compute-2 nova_compute[226433]: 2026-01-22 14:00:59.808 226437 DEBUG nova.virt.libvirt.imagebackend [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Image locations are: [{'url': 'rbd://088fe176-0106-5401-803c-2da38b73b76a/images/dc084f46-456d-429d-85f6-836af4fccd82/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://088fe176-0106-5401-803c-2da38b73b76a/images/dc084f46-456d-429d-85f6-836af4fccd82/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 22 14:00:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:00:59.990+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:00:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:00.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:00 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 1449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:00 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:00.955+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:01.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:01 compute-2 ceph-mon[77081]: pgmap v989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 102 B/s wr, 0 op/s
Jan 22 14:01:01 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:01 compute-2 nova_compute[226433]: 2026-01-22 14:01:01.786 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:01:01 compute-2 nova_compute[226433]: 2026-01-22 14:01:01.808 226437 DEBUG nova.network.neutron [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Successfully created port: 5ba36b18-c922-4b29-af7a-c790a2063b41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 22 14:01:01 compute-2 nova_compute[226433]: 2026-01-22 14:01:01.842 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.part --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:01:01 compute-2 nova_compute[226433]: 2026-01-22 14:01:01.843 226437 DEBUG nova.virt.images [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] dc084f46-456d-429d-85f6-836af4fccd82 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 22 14:01:01 compute-2 nova_compute[226433]: 2026-01-22 14:01:01.844 226437 DEBUG nova.privsep.utils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 22 14:01:01 compute-2 nova_compute[226433]: 2026-01-22 14:01:01.844 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.part /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:01:01 compute-2 CROND[230019]: (root) CMD (run-parts /etc/cron.hourly)
Jan 22 14:01:01 compute-2 run-parts[230023]: (/etc/cron.hourly) starting 0anacron
Jan 22 14:01:01 compute-2 run-parts[230036]: (/etc/cron.hourly) finished 0anacron
Jan 22 14:01:01 compute-2 CROND[230018]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 22 14:01:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:01.993+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:02 compute-2 nova_compute[226433]: 2026-01-22 14:01:02.014 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.part /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.converted" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:01:02 compute-2 nova_compute[226433]: 2026-01-22 14:01:02.019 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:01:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:01:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:01:02 compute-2 nova_compute[226433]: 2026-01-22 14:01:02.070 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:01:02 compute-2 nova_compute[226433]: 2026-01-22 14:01:02.071 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:01:02 compute-2 nova_compute[226433]: 2026-01-22 14:01:02.099 226437 DEBUG nova.storage.rbd_utils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] rbd image e0e74330-96df-479f-8baf-53fbd2ccba91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:01:02 compute-2 nova_compute[226433]: 2026-01-22 14:01:02.103 226437 DEBUG oslo_concurrency.processutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 e0e74330-96df-479f-8baf-53fbd2ccba91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:01:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:03 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:01:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:01:03.404 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:01:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:01:03.405 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:01:03 compute-2 sudo[230079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:03 compute-2 sudo[230079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:03 compute-2 sudo[230079]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:03 compute-2 podman[230103]: 2026-01-22 14:01:03.494980412 +0000 UTC m=+0.046134437 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 14:01:03 compute-2 sudo[230110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:03 compute-2 sudo[230110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:03 compute-2 sudo[230110]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:03 compute-2 nova_compute[226433]: 2026-01-22 14:01:03.786 226437 DEBUG nova.network.neutron [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Successfully updated port: 5ba36b18-c922-4b29-af7a-c790a2063b41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 22 14:01:03 compute-2 nova_compute[226433]: 2026-01-22 14:01:03.802 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Acquiring lock "refresh_cache-e0e74330-96df-479f-8baf-53fbd2ccba91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:01:03 compute-2 nova_compute[226433]: 2026-01-22 14:01:03.803 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Acquired lock "refresh_cache-e0e74330-96df-479f-8baf-53fbd2ccba91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:01:03 compute-2 nova_compute[226433]: 2026-01-22 14:01:03.803 226437 DEBUG nova.network.neutron [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:01:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:04.010+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:04.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:04 compute-2 nova_compute[226433]: 2026-01-22 14:01:04.093 226437 DEBUG nova.network.neutron [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:01:04 compute-2 nova_compute[226433]: 2026-01-22 14:01:04.322 226437 DEBUG nova.compute.manager [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Received event network-changed-5ba36b18-c922-4b29-af7a-c790a2063b41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:01:04 compute-2 nova_compute[226433]: 2026-01-22 14:01:04.323 226437 DEBUG nova.compute.manager [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Refreshing instance network info cache due to event network-changed-5ba36b18-c922-4b29-af7a-c790a2063b41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:01:04 compute-2 nova_compute[226433]: 2026-01-22 14:01:04.323 226437 DEBUG oslo_concurrency.lockutils [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-e0e74330-96df-479f-8baf-53fbd2ccba91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:01:04 compute-2 ceph-mon[77081]: pgmap v990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 102 B/s wr, 7 op/s
Jan 22 14:01:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:04.990+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:05 compute-2 nova_compute[226433]: 2026-01-22 14:01:05.237 226437 DEBUG nova.network.neutron [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Updating instance_info_cache with network_info: [{"id": "5ba36b18-c922-4b29-af7a-c790a2063b41", "address": "fa:16:3e:60:fd:b0", "network": {"id": "5cfd4647-c999-4e18-9ac8-73f14a80f11d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1765217249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87e683d63c47432aa4cffe28b42e8de7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ba36b18-c9", "ovs_interfaceid": "5ba36b18-c922-4b29-af7a-c790a2063b41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:01:05 compute-2 nova_compute[226433]: 2026-01-22 14:01:05.259 226437 DEBUG oslo_concurrency.lockutils [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Releasing lock "refresh_cache-e0e74330-96df-479f-8baf-53fbd2ccba91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:01:05 compute-2 nova_compute[226433]: 2026-01-22 14:01:05.260 226437 DEBUG nova.compute.manager [None req-1844ae2b-83eb-40a7-b9cd-6ccad3708f70 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Instance network_info: |[{"id": "5ba36b18-c922-4b29-af7a-c790a2063b41", "address": "fa:16:3e:60:fd:b0", "network": {"id": "5cfd4647-c999-4e18-9ac8-73f14a80f11d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1765217249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87e683d63c47432aa4cffe28b42e8de7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ba36b18-c9", "ovs_interfaceid": "5ba36b18-c922-4b29-af7a-c790a2063b41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:01:05 compute-2 nova_compute[226433]: 2026-01-22 14:01:05.260 226437 DEBUG oslo_concurrency.lockutils [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-e0e74330-96df-479f-8baf-53fbd2ccba91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:01:05 compute-2 nova_compute[226433]: 2026-01-22 14:01:05.261 226437 DEBUG nova.network.neutron [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Refreshing network info cache for port 5ba36b18-c922-4b29-af7a-c790a2063b41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:01:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:05 compute-2 ceph-mon[77081]: pgmap v991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 51 MiB data, 203 MiB used, 21 GiB / 21 GiB avail; 825 KiB/s rd, 874 KiB/s wr, 8 op/s
Jan 22 14:01:05 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:05 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:05.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:05.996+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:01:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:01:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:06.989+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:07.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:07 compute-2 ceph-mon[77081]: pgmap v992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 23 op/s
Jan 22 14:01:07 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:07.975+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:08.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:08 compute-2 nova_compute[226433]: 2026-01-22 14:01:08.387 226437 DEBUG nova.network.neutron [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Updated VIF entry in instance network info cache for port 5ba36b18-c922-4b29-af7a-c790a2063b41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 14:01:08 compute-2 nova_compute[226433]: 2026-01-22 14:01:08.388 226437 DEBUG nova.network.neutron [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Updating instance_info_cache with network_info: [{"id": "5ba36b18-c922-4b29-af7a-c790a2063b41", "address": "fa:16:3e:60:fd:b0", "network": {"id": "5cfd4647-c999-4e18-9ac8-73f14a80f11d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1765217249-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "87e683d63c47432aa4cffe28b42e8de7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ba36b18-c9", "ovs_interfaceid": "5ba36b18-c922-4b29-af7a-c790a2063b41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:01:08 compute-2 nova_compute[226433]: 2026-01-22 14:01:08.412 226437 DEBUG oslo_concurrency.lockutils [req-2beb5a2b-46cf-4357-b266-580091ae2eec req-7a9c087d-2f3d-4331-bafb-df1b0a90e7a3 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-e0e74330-96df-479f-8baf-53fbd2ccba91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:01:08 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:08.958+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:01:09.407 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:01:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:09.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:09.949+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:10.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:10 compute-2 ceph-mon[77081]: pgmap v993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 22 op/s
Jan 22 14:01:10 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:10 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:10.963+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:11.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:11 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:11 compute-2 ceph-mon[77081]: pgmap v994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 22 op/s
Jan 22 14:01:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:12.009+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:12.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:13.015+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:13 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:13.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:14.037+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:14.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:14 compute-2 ceph-mon[77081]: pgmap v995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 22 op/s
Jan 22 14:01:14 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:15.034+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:15 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:15 compute-2 ceph-mon[77081]: pgmap v996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 16 op/s
Jan 22 14:01:15 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:15.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:16.002+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:16 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:17.004+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:17.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:17 compute-2 ceph-mon[77081]: pgmap v997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 15 op/s
Jan 22 14:01:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:17 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1460541179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:18.013+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:18.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:19.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2016890132' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:01:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2016890132' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:01:19 compute-2 ceph-mon[77081]: pgmap v998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 84 MiB data, 215 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:19.998+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:20 compute-2 podman[230156]: 2026-01-22 14:01:20.037774107 +0000 UTC m=+0.099673480 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
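
The podman event above is the periodic container healthcheck: podman runs the configured 'test' command (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/ovn_controller) inside ovn_controller and records the verdict, with health_failing_streak counting consecutive failures. A sketch for following these verdicts as they happen (assumes a podman recent enough to emit health_status events, as this journal shows; the JSON field names are taken from current podman docs and worth verifying):

    import json
    import subprocess

    # Stream health_status events as JSON, one object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Field names are an assumption; print the raw event if they differ.
        print(ev.get("Name"), ev.get("HealthStatus"))
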
Jan 22 14:01:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:20.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:20 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:21.024+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:21 compute-2 ceph-mon[77081]: pgmap v999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 91 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 114 KiB/s wr, 1 op/s
Jan 22 14:01:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:21.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:22.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:22.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:23.068+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:23 compute-2 ceph-mon[77081]: pgmap v1000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 231 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:23 compute-2 sudo[230184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:23 compute-2 sudo[230184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:23 compute-2 sudo[230184]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:23 compute-2 sudo[230209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:23 compute-2 sudo[230209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:23 compute-2 sudo[230209]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:23.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:24.031+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:24.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:25.022+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:25 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:25 compute-2 ceph-mon[77081]: pgmap v1001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:25 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1474 sec, osd.2 has slow ops (SLOW_OPS)
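
The SLOW_OPS counter climbs by exactly 5 with each monitor refresh (1459, 1464, 1469, 1474 seconds across 14:01:10 to 14:01:25), so the health check runs on a five-second cadence and the stuck op predates this excerpt by almost 25 minutes. Working backwards from the 14:01:10 report:

    from datetime import datetime, timedelta

    # Health update at 14:01:10 reported the oldest op blocked for 1459 s.
    report = datetime(2026, 1, 22, 14, 1, 10)
    print((report - timedelta(seconds=1459)).time())
    # -> 13:36:51, the approximate moment client.14140's read got stuck
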
Jan 22 14:01:25 compute-2 sudo[230235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:25 compute-2 sudo[230235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:25 compute-2 sudo[230235]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:25 compute-2 sudo[230260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:01:25 compute-2 sudo[230260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:25 compute-2 sudo[230260]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:25 compute-2 sudo[230285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:25 compute-2 sudo[230285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:25 compute-2 sudo[230285]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:25 compute-2 sudo[230310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:01:25 compute-2 sudo[230310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:25.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:26.007+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:26.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:26 compute-2 sudo[230310]: pam_unix(sudo:session): session closed for user root
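
The ceph-admin sudo bursts are cephadm's remote-execution pattern: a few /bin/true invocations to confirm passwordless sudo works, a `which python3` to locate an interpreter, then the digest-named copy of the cephadm binary run with `gather-facts` to inventory the host for the orchestrator. gather-facts prints a JSON document of host facts; a hedged sketch of collecting it locally (assumes a cephadm binary on PATH, whereas the logged run used the copy under /var/lib/ceph/<fsid>/, and treats the fact schema as opaque):

    import json
    import subprocess

    facts = json.loads(
        subprocess.run(
            ["cephadm", "gather-facts"],
            capture_output=True, text=True, check=True,
        ).stdout
    )
    # Print a few top-level keys instead of assuming the full schema.
    for key in sorted(facts)[:10]:
        print(key, "=", facts[key])
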
Jan 22 14:01:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:26 compute-2 nova_compute[226433]: 2026-01-22 14:01:26.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:26 compute-2 nova_compute[226433]: 2026-01-22 14:01:26.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:26.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:27 compute-2 ceph-mon[77081]: pgmap v1002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:01:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:01:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:01:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:01:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:01:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:01:27 compute-2 nova_compute[226433]: 2026-01-22 14:01:27.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:27 compute-2 nova_compute[226433]: 2026-01-22 14:01:27.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:01:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:27.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:27.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:01:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:28.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:01:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:28 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:28 compute-2 nova_compute[226433]: 2026-01-22 14:01:28.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:28 compute-2 nova_compute[226433]: 2026-01-22 14:01:28.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:28 compute-2 nova_compute[226433]: 2026-01-22 14:01:28.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:01:28 compute-2 nova_compute[226433]: 2026-01-22 14:01:28.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:01:28 compute-2 nova_compute[226433]: 2026-01-22 14:01:28.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:01:28 compute-2 nova_compute[226433]: 2026-01-22 14:01:28.542 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:01:28 compute-2 nova_compute[226433]: 2026-01-22 14:01:28.543 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:01:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:28.957+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:01:29 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1207234773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.020 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
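
The audit's disk numbers come from the exact command logged above: nova shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` repeatedly during update_available_resource, reading cluster totals from the "stats" object and per-pool figures from "pools". A sketch of the same probe; the field names follow upstream `ceph df -f json` output and should be checked against the deployed release:

    import json
    import subprocess

    # The same probe nova runs from this compute host (see the logged CMD).
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True,
                                   text=True, check=True).stdout)

    stats = df["stats"]
    print(f"cluster: {stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")

    # Per-pool view, e.g. the 'vms' pool the slow op above is pinned to.
    for pool in df["pools"]:
        s = pool["stats"]
        print(pool["name"], "stored:", s["stored"], "max_avail:", s["max_avail"])
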
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.176 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.177 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5227MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.178 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.178 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.252 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.253 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.253 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.290 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:01:29 compute-2 ceph-mon[77081]: pgmap v1003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:29 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1207234773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:01:29 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3234656352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:29.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.738 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.744 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.796 226437 ERROR nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [req-afe624e8-72de-4d59-881f-0534eaecd57b] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID d4dcb68c-0009-4467-a6f7-0e9fe0236fbc.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-afe624e8-72de-4d59-881f-0534eaecd57b"}]}
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.827 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.846 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.846 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.867 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.892 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:01:29 compute-2 nova_compute[226433]: 2026-01-22 14:01:29.930 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:01:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:29.985+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:01:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2414432863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:30 compute-2 nova_compute[226433]: 2026-01-22 14:01:30.339 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:01:30 compute-2 nova_compute[226433]: 2026-01-22 14:01:30.346 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:01:30 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3234656352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:30 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2414432863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:30 compute-2 nova_compute[226433]: 2026-01-22 14:01:30.507 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updated inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 22 14:01:30 compute-2 nova_compute[226433]: 2026-01-22 14:01:30.507 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 22 14:01:30 compute-2 nova_compute[226433]: 2026-01-22 14:01:30.507 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:01:30 compute-2 nova_compute[226433]: 2026-01-22 14:01:30.534 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:01:30 compute-2 nova_compute[226433]: 2026-01-22 14:01:30.534 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
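
The ERROR at 14:01:29.796 is benign and self-healing: placement rejected the inventory PUT with 409 placement.concurrent_update because another writer had already advanced the resource provider's generation past the value nova had cached. Nova responds by refreshing inventories, aggregates, and traits, then replays the update, which lands at generation 3 and bumps it to 4 at 14:01:30.507. The same optimistic-concurrency loop in miniature (a toy client and in-memory "placement", not nova's actual code):

    class Conflict(Exception):
        """Stand-in for placement's 409 placement.concurrent_update."""

    # Toy server state: a write is accepted only when the caller's
    # generation matches, then the generation is bumped (compare-and-swap).
    state = {"generation": 4, "inventory": {}}

    def put_inventory(cached_gen: int, inventory: dict) -> int:
        if cached_gen != state["generation"]:
            raise Conflict
        state["inventory"] = inventory
        state["generation"] += 1
        return state["generation"]

    def update_with_retry(cached_gen: int, inventory: dict,
                          retries: int = 3) -> int:
        for _ in range(retries):
            try:
                return put_inventory(cached_gen, inventory)
            except Conflict:
                # Refresh and replay, as nova does after the 409 above.
                cached_gen = state["generation"]
        raise RuntimeError("repeated generation conflicts")

    # Nova cached generation 3, but another writer had moved it to 4:
    print(update_with_retry(3, {"VCPU": {"total": 8}}))  # retries, prints 5
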
Jan 22 14:01:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:31.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:31 compute-2 ceph-mon[77081]: pgmap v1004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:01:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.530 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.531 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.555 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.555 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.556 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.573 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.574 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.574 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:31 compute-2 nova_compute[226433]: 2026-01-22 14:01:31.574 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:01:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:31.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:32.013+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:32 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:33.011+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:33 compute-2 sudo[230436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:33 compute-2 sudo[230436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:33 compute-2 sudo[230436]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:33 compute-2 ceph-mon[77081]: pgmap v1005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 7.9 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 22 14:01:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1550481817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:01:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:01:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:01:33 compute-2 sudo[230461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:01:33 compute-2 sudo[230461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:33 compute-2 sudo[230461]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:33 compute-2 podman[230485]: 2026-01-22 14:01:33.598724489 +0000 UTC m=+0.049335472 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 14:01:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:01:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:33.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:01:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:33.992+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:34.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2455143070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:01:34 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:34 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 1484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:34.949+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:35.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:35 compute-2 ceph-mon[77081]: pgmap v1006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:35 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:35.993+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:36 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:36.955+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:37 compute-2 ceph-mon[77081]: pgmap v1007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:37 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:37.926+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:01:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:38.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:01:38 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:38.921+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:39.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:39.934+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:40 compute-2 ceph-mon[77081]: pgmap v1008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:40 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:40 compute-2 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 1489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:40.938+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:41 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:41 compute-2 ceph-mon[77081]: pgmap v1009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:41.909+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:01:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:42.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:01:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:42.929+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:43 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:43 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:43 compute-2 sudo[230509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:43 compute-2 sudo[230509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:01:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:01:43 compute-2 sudo[230509]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:43 compute-2 sudo[230534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:01:43 compute-2 sudo[230534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:01:43 compute-2 sudo[230534]: pam_unix(sudo:session): session closed for user root
Jan 22 14:01:43 compute-2 ceph-mon[77081]: pgmap v1010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:43 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:43.930+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:44.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:44.923+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:44 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:44 compute-2 ceph-mon[77081]: pgmap v1011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:44 compute-2 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 1494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:45.885+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:46 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:46.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:46.855+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:47 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:47 compute-2 ceph-mon[77081]: pgmap v1012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:01:47.169 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:01:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:01:47.170 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:01:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:01:47.170 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:01:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:47.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:47.852+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:48 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:48.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:48.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:49 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:01:49 compute-2 ceph-mon[77081]: pgmap v1013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:49.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:49.849+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:50 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:50 compute-2 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 1499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:50.835+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:51 compute-2 podman[230564]: 2026-01-22 14:01:51.048150035 +0000 UTC m=+0.106345248 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 14:01:51 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:51 compute-2 ceph-mon[77081]: pgmap v1014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:51.825+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:52 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:52.808+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:53 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:53 compute-2 ceph-mon[77081]: pgmap v1015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:53.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:53.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:54.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:54 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:54.818+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:55 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:55 compute-2 ceph-mon[77081]: pgmap v1016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:55 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:01:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:55.841+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:56.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:56.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:57 compute-2 ceph-mon[77081]: pgmap v1017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:57 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:57.769+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:01:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:01:58.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:01:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:01:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:58.743+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:59 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:01:59.704+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:01:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:01:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:01:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:01:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:01:59 compute-2 ceph-mon[77081]: pgmap v1018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:01:59 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:01:59 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:00.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:00.749+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:01 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:01.709+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:02 compute-2 ceph-mon[77081]: pgmap v1019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:02 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:02.690+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:03 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:03 compute-2 ceph-mon[77081]: pgmap v1020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:03.669+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:03 compute-2 sudo[230597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:03 compute-2 sudo[230597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:03 compute-2 sudo[230597]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:04 compute-2 podman[230596]: 2026-01-22 14:02:04.000516959 +0000 UTC m=+0.063668314 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:02:04 compute-2 sudo[230640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:04 compute-2 sudo[230640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:04 compute-2 sudo[230640]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:04 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:04.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:04.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:05 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:05 compute-2 ceph-mon[77081]: pgmap v1021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:05 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:05.602+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:06.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:06 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:06.599+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:07 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:07 compute-2 ceph-mon[77081]: pgmap v1022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:07.601+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:07.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:08 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:08 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:08.638+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:09 compute-2 ceph-mon[77081]: pgmap v1023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:09 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:09.619+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:10.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:11 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1519 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:11 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:11.585+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:11.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:12.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:12.555+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:12 compute-2 ceph-mon[77081]: pgmap v1024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:12 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:13.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:13 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:13 compute-2 ceph-mon[77081]: pgmap v1025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:13 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:13.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:14.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:14.592+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:14 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:14 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1524 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:15.612+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:15 compute-2 ceph-mon[77081]: pgmap v1026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:15 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:15.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:16.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:16.644+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:16 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:17.616+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:18.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:18 compute-2 ceph-mon[77081]: pgmap v1027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:18 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:02:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3408450385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:02:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:02:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3408450385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:02:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:18.642+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:18 compute-2 sshd-session[230672]: Invalid user eth from 92.118.39.95 port 46476
Jan 22 14:02:19 compute-2 sshd-session[230672]: Connection closed by invalid user eth 92.118.39.95 port 46476 [preauth]
Jan 22 14:02:19 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:19 compute-2 ceph-mon[77081]: pgmap v1028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3408450385' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:02:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3408450385' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:02:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:19.623+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:19.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:20.667+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:20 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:20 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:21.645+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:21.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:22 compute-2 podman[230676]: 2026-01-22 14:02:22.109035732 +0000 UTC m=+0.165078749 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:02:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:22.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:22.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:23 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:23 compute-2 ceph-mon[77081]: pgmap v1029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:23.627+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:23.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:24.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:24 compute-2 sudo[230703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:24 compute-2 sudo[230703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:24 compute-2 sudo[230703]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:24 compute-2 sudo[230728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:24 compute-2 sudo[230728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:24 compute-2 sudo[230728]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:24.578+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:25 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:25 compute-2 ceph-mon[77081]: pgmap v1030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:25.544+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:25.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:26.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:26.542+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:26 compute-2 ceph-mon[77081]: pgmap v1031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:27 compute-2 nova_compute[226433]: 2026-01-22 14:02:27.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:27.580+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-2 ceph-mon[77081]: pgmap v1032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:27 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:27.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:28.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:28 compute-2 nova_compute[226433]: 2026-01-22 14:02:28.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:28.607+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:28 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.548 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.548 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:02:29 compute-2 nova_compute[226433]: 2026-01-22 14:02:29.548 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:02:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:29.603+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:29.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:02:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3658520421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:30 compute-2 ceph-mon[77081]: pgmap v1033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:30 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.025 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:02:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.222 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.223 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5268MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.223 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.224 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.380 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:02:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:30.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:02:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2414825092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.814 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.820 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.841 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.844 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:02:30 compute-2 nova_compute[226433]: 2026-01-22 14:02:30.845 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:02:31 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3658520421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:31 compute-2 ceph-mon[77081]: pgmap v1034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2414825092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:31.610+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:31.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.841 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.842 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.842 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.843 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.863 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.863 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.865 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:31 compute-2 nova_compute[226433]: 2026-01-22 14:02:31.865 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:02:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:32.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:32 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:32.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:33 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:33 compute-2 ceph-mon[77081]: pgmap v1035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3545890113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:33.551+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:33 compute-2 sudo[230802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:33 compute-2 sudo[230802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:33 compute-2 sudo[230802]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:33 compute-2 sudo[230827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:02:33 compute-2 sudo[230827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:33 compute-2 sudo[230827]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:33 compute-2 sudo[230852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:33 compute-2 sudo[230852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:33 compute-2 sudo[230852]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:33.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:33 compute-2 sudo[230877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:02:33 compute-2 sudo[230877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:34.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:34 compute-2 sudo[230877]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:34.599+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:35 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:35 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1840054656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:02:35 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:35 compute-2 podman[230934]: 2026-01-22 14:02:35.061276109 +0000 UTC m=+0.109807306 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:02:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:35.620+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:35.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:36.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:36 compute-2 ceph-mon[77081]: pgmap v1036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:36 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:36.635+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:37.662+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:37 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:37 compute-2 ceph-mon[77081]: pgmap v1037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:38.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:38.640+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:39 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:39 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:02:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:02:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:02:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:02:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:02:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:39.673+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:40 compute-2 ceph-mon[77081]: pgmap v1038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:40 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:40 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:40.706+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:41.711+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:02:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:41.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:02:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:42.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:42 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:42 compute-2 ceph-mon[77081]: pgmap v1039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:42.694+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:43 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:43 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:43 compute-2 ceph-mon[77081]: pgmap v1040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:43.732+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:43.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:44.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:44 compute-2 sudo[230956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:44 compute-2 sudo[230956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:44 compute-2 sudo[230956]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:44 compute-2 sudo[230981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:44 compute-2 sudo[230981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:44 compute-2 sudo[230981]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:44 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:44 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:44.719+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:45 compute-2 ceph-mon[77081]: pgmap v1041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:45 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:45 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:02:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:45.670+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:45 compute-2 sudo[231007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:02:45 compute-2 sudo[231007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:45 compute-2 sudo[231007]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:45.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:45 compute-2 sudo[231032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:02:45 compute-2 sudo[231032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:02:45 compute-2 sudo[231032]: pam_unix(sudo:session): session closed for user root
Jan 22 14:02:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:46 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:46.655+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:02:47.171 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:02:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:02:47.171 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:02:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:02:47.171 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:02:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:47.628+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:47 compute-2 ceph-mon[77081]: pgmap v1042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:47 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:02:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:47.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:02:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:48.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:48.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:49 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:49.534+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:02:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:49.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:02:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:02:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:50.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:02:50 compute-2 ceph-mon[77081]: pgmap v1043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:50 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:50 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1559 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:50.553+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:51 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:51 compute-2 ceph-mon[77081]: pgmap v1044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:51.533+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:51.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:52.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:52 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:52.578+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:53 compute-2 podman[231061]: 2026-01-22 14:02:53.004384341 +0000 UTC m=+0.071350072 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:02:53 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:53 compute-2 ceph-mon[77081]: pgmap v1045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:53.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:53.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:02:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:54.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:02:54 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:54.591+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:55 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:55 compute-2 ceph-mon[77081]: pgmap v1046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:55 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:02:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:55.632+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:02:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:55.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:02:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:02:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:56.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:02:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:56.664+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:57.709+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:57.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:02:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:02:58.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:02:58 compute-2 ceph-mon[77081]: pgmap v1047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:02:58 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:58.744+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:02:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:02:59.778+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:02:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:02:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:02:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:02:59.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:02:59 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:02:59 compute-2 ceph-mon[77081]: pgmap v1048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:00.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:00.801+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:01 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:01.786+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:01.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:02.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:02 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:02 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:02 compute-2 ceph-mon[77081]: pgmap v1049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:02 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:02.791+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:03 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:03 compute-2 ceph-mon[77081]: pgmap v1050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:03 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:03.803+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:04 compute-2 sudo[231092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:04 compute-2 sudo[231092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:04 compute-2 sudo[231092]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:04 compute-2 sudo[231117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:04 compute-2 sudo[231117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:04 compute-2 sudo[231117]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:04.800+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:05 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:05 compute-2 ceph-mon[77081]: pgmap v1051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:05 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:05 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:05.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:05 compute-2 podman[231143]: 2026-01-22 14:03:05.987295479 +0000 UTC m=+0.049869836 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:03:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:06.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:06.745+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:06 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:07.710+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:07 compute-2 ceph-mon[77081]: pgmap v1052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:07 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:07.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:08 compute-2 sshd-session[231164]: Invalid user solv from 45.148.10.240 port 49556
Jan 22 14:03:08 compute-2 sshd-session[231164]: Connection closed by invalid user solv 45.148.10.240 port 49556 [preauth]
Jan 22 14:03:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:08.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:08.737+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:08 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:09.742+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:09 compute-2 ceph-mon[77081]: pgmap v1053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:09 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:09 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1579 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:09.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:10.751+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:10 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:11.776+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:11 compute-2 ceph-mon[77081]: pgmap v1054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:11 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:12.772+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:13 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:13 compute-2 ceph-mon[77081]: pgmap v1055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:13.728+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:14.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:14.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:15 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:15.697+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:15 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:15 compute-2 ceph-mon[77081]: pgmap v1056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:15 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:15 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:16.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:16.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:16 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:16 compute-2 ceph-mon[77081]: pgmap v1057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:17.705+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:17 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:18.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:18.723+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:18 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/197283772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:03:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/197283772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:03:18 compute-2 ceph-mon[77081]: pgmap v1058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:18 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Jan 22 14:03:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:18.993379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:03:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Jan 22 14:03:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090598993435, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2510, "num_deletes": 251, "total_data_size": 5105737, "memory_usage": 5172648, "flush_reason": "Manual Compaction"}
Jan 22 14:03:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599019797, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3322743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26652, "largest_seqno": 29156, "table_properties": {"data_size": 3313165, "index_size": 5624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 24372, "raw_average_key_size": 21, "raw_value_size": 3291995, "raw_average_value_size": 2910, "num_data_blocks": 246, "num_entries": 1131, "num_filter_entries": 1131, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090421, "oldest_key_time": 1769090421, "file_creation_time": 1769090598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 26472 microseconds, and 7024 cpu microseconds.
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.019857) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3322743 bytes OK
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.019879) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.021612) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.021629) EVENT_LOG_v1 {"time_micros": 1769090599021624, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.021648) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5094300, prev total WAL file size 5094300, number of live WAL files 2.
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.022971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3244KB)], [54(7129KB)]
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599022995, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 10623856, "oldest_snapshot_seqno": -1}
Jan 22 14:03:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6644 keys, 8912650 bytes, temperature: kUnknown
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599079876, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 8912650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8871697, "index_size": 23241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 174045, "raw_average_key_size": 26, "raw_value_size": 8753829, "raw_average_value_size": 1317, "num_data_blocks": 917, "num_entries": 6644, "num_filter_entries": 6644, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090599, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.081344) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8912650 bytes
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.082668) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.6 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 7164, records dropped: 520 output_compression: NoCompression
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.082694) EVENT_LOG_v1 {"time_micros": 1769090599082684, "job": 32, "event": "compaction_finished", "compaction_time_micros": 56935, "compaction_time_cpu_micros": 21017, "output_level": 6, "num_output_files": 1, "total_output_size": 8912650, "num_input_records": 7164, "num_output_records": 6644, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599083345, "job": 32, "event": "table_file_deletion", "file_number": 56}
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090599084837, "job": 32, "event": "table_file_deletion", "file_number": 54}
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.022868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.084868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.084872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.084874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.084876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:03:19.084878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:03:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:19.740+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:20.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:20 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:20 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:20.733+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:21 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:21 compute-2 ceph-mon[77081]: pgmap v1059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:21.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:22 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:22 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:22.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:23.687+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:23 compute-2 ceph-mon[77081]: pgmap v1060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:23 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:23.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:24 compute-2 podman[231174]: 2026-01-22 14:03:24.035723605 +0000 UTC m=+0.093845955 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:03:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:24.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:24 compute-2 sudo[231201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:24 compute-2 sudo[231201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:24 compute-2 sudo[231201]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:24.648+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:24 compute-2 sudo[231226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:24 compute-2 sudo[231226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:24 compute-2 sudo[231226]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:25 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:25 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:25.695+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:26.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:26 compute-2 ceph-mon[77081]: pgmap v1061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:26 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:26.737+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:27 compute-2 nova_compute[226433]: 2026-01-22 14:03:27.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:27 compute-2 ceph-mon[77081]: pgmap v1062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:27.702+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:27.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:28.657+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:28 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:29 compute-2 nova_compute[226433]: 2026-01-22 14:03:29.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:29.627+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:29 compute-2 nova_compute[226433]: 2026-01-22 14:03:29.777 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:03:29 compute-2 nova_compute[226433]: 2026-01-22 14:03:29.777 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:03:29 compute-2 nova_compute[226433]: 2026-01-22 14:03:29.778 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:03:29 compute-2 nova_compute[226433]: 2026-01-22 14:03:29.778 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:03:29 compute-2 nova_compute[226433]: 2026-01-22 14:03:29.778 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:03:29 compute-2 ceph-mon[77081]: pgmap v1063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:29 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:29 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:29.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:03:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/393160789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.208 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:03:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:30.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.362 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.364 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5211MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.364 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.364 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.485 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.485 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.486 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.539 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:03:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:30.614+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:30 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/393160789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:03:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3818535632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.971 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.976 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:03:30 compute-2 nova_compute[226433]: 2026-01-22 14:03:30.998 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.000 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.000 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:03:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:31.566+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:31 compute-2 ceph-mon[77081]: pgmap v1064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:31 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3818535632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:31.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.995 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.995 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.996 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.996 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.996 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:31 compute-2 nova_compute[226433]: 2026-01-22 14:03:31.997 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:03:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:32.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:32 compute-2 nova_compute[226433]: 2026-01-22 14:03:32.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:32 compute-2 nova_compute[226433]: 2026-01-22 14:03:32.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:03:32 compute-2 nova_compute[226433]: 2026-01-22 14:03:32.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:03:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:32.541+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:32 compute-2 nova_compute[226433]: 2026-01-22 14:03:32.548 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:03:32 compute-2 nova_compute[226433]: 2026-01-22 14:03:32.548 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:03:32 compute-2 nova_compute[226433]: 2026-01-22 14:03:32.548 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:32 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:33.542+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:33 compute-2 ceph-mon[77081]: pgmap v1065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:33 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/364104935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:33.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:34.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:34.499+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:34 compute-2 nova_compute[226433]: 2026-01-22 14:03:34.543 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:03:34 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:34 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2280181550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:03:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:35.489+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:35 compute-2 ceph-mon[77081]: pgmap v1066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:35 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:35.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:36 compute-2 rsyslogd[1002]: imjournal: 4930 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 22 14:03:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:36.453+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:36 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:36 compute-2 ceph-mon[77081]: pgmap v1067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:36 compute-2 podman[231301]: 2026-01-22 14:03:36.988125755 +0000 UTC m=+0.050687167 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 14:03:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:37.489+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:37 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:37.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:38.534+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:39 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:39 compute-2 ceph-mon[77081]: pgmap v1068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:39.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:39.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:40.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:40 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:40 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:40.506+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:41 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:41 compute-2 ceph-mon[77081]: pgmap v1069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:41.475+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:41.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:42.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:42.464+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:42 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:42 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:43.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:43 compute-2 ceph-mon[77081]: pgmap v1070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:43 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:43.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:44.461+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:44 compute-2 sudo[231324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:44 compute-2 sudo[231324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:44 compute-2 sudo[231324]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:44 compute-2 sudo[231349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:44 compute-2 sudo[231349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:44 compute-2 sudo[231349]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:45 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:45 compute-2 ceph-mon[77081]: pgmap v1071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:45 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:45.445+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:45 compute-2 sudo[231374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:45 compute-2 sudo[231374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:45.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:45 compute-2 sudo[231374]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:46 compute-2 sudo[231399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:03:46 compute-2 sudo[231399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:46 compute-2 sudo[231399]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:46 compute-2 sudo[231424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:46 compute-2 sudo[231424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:46 compute-2 sudo[231424]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:46 compute-2 sudo[231449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:03:46 compute-2 sudo[231449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:46 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:46.417+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:46 compute-2 sudo[231449]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:03:47.172 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:03:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:03:47.172 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:03:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:03:47.172 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:03:47 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:47 compute-2 ceph-mon[77081]: pgmap v1072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:47.371+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:47.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:48 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:48 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:48.414+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:49 compute-2 ceph-mon[77081]: pgmap v1073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:49 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:03:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:03:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:49.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:50.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:50 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:51.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:51 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:51 compute-2 ceph-mon[77081]: pgmap v1074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:51 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:51.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:52.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:52.434+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:52 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:53.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:53.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:53 compute-2 ceph-mon[77081]: pgmap v1075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:53 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:53 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:54.421+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:55 compute-2 podman[231510]: 2026-01-22 14:03:55.031059679 +0000 UTC m=+0.092444408 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 14:03:55 compute-2 ceph-mon[77081]: pgmap v1076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:55 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:55.437+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:03:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:55.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:03:56 compute-2 sudo[231537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:03:56 compute-2 sudo[231537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:56 compute-2 sudo[231537]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:56 compute-2 sudo[231562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:03:56 compute-2 sudo[231562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:03:56 compute-2 sudo[231562]: pam_unix(sudo:session): session closed for user root
Jan 22 14:03:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:56.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:56.481+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:03:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:57.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:57 compute-2 ceph-mon[77081]: pgmap v1077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:57 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:57.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:03:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:03:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:03:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:58.480+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:58 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:03:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:59.489+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:03:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:59 compute-2 ceph-mon[77081]: pgmap v1078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:03:59 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:03:59 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:03:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:03:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:03:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:59.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:00.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:00.440+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:00 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:01.455+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:01 compute-2 ceph-mon[77081]: pgmap v1079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:01 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:02.469+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:03 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:03 compute-2 ceph-mon[77081]: pgmap v1080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:03.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:04 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:04:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:04.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:04:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:04.402+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:04 compute-2 sudo[231592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:04 compute-2 sudo[231592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:04 compute-2 sudo[231592]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:04 compute-2 sudo[231617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:04 compute-2 sudo[231617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:04 compute-2 sudo[231617]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:05 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:05 compute-2 ceph-mon[77081]: pgmap v1081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:05 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:05 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:05.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:05.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:06.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:06.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:06 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:07.453+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:07 compute-2 ceph-mon[77081]: pgmap v1082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:07 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:07.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:08 compute-2 podman[231643]: 2026-01-22 14:04:08.006824766 +0000 UTC m=+0.065288852 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:04:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:08.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:08.423+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:09.416+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:09 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:09.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:04:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:10.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:04:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:10.437+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:10 compute-2 ceph-mon[77081]: pgmap v1083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:10 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:10 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:10 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:11.480+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:11 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 14:04:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:12 compute-2 ceph-mon[77081]: pgmap v1084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:12 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:12.480+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:13 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:13 compute-2 ceph-mon[77081]: pgmap v1085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:13.497+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:14.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:14.476+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:14 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:14 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:15.458+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:04:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:04:16 compute-2 ceph-mon[77081]: pgmap v1086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:16 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:16 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:16.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:16.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:17 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:17 compute-2 ceph-mon[77081]: pgmap v1087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:17.459+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:18.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:18.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:18 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:04:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1206611799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:04:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:18.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:19 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:19 compute-2 ceph-mon[77081]: pgmap v1088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:19.528+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:04:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:20.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:04:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:20 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:20 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:20.558+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:21.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:21 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:21 compute-2 ceph-mon[77081]: pgmap v1089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:21 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:22.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:22.539+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:22 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:23.587+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:23 compute-2 ceph-mon[77081]: pgmap v1090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:23 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:24.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:24.554+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:24 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:24 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:25 compute-2 sudo[231673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:25 compute-2 sudo[231673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:25 compute-2 sudo[231673]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:25 compute-2 sudo[231704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:25 compute-2 sudo[231704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:25 compute-2 sudo[231704]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:25 compute-2 podman[231697]: 2026-01-22 14:04:25.277337981 +0000 UTC m=+0.087865443 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:04:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:25.589+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:25 compute-2 ceph-mon[77081]: pgmap v1091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:25 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:26.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:26.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:26.634+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:27 compute-2 ceph-mon[77081]: pgmap v1092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:27.640+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:28 compute-2 sshd-session[231751]: Invalid user jito from 92.118.39.95 port 53668
Jan 22 14:04:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:28.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:28 compute-2 sshd-session[231751]: Connection closed by invalid user jito 92.118.39.95 port 53668 [preauth]
Jan 22 14:04:28 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:28 compute-2 nova_compute[226433]: 2026-01-22 14:04:28.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:28.596+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:29 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:29 compute-2 ceph-mon[77081]: pgmap v1093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:29.605+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:30.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:30.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:30 compute-2 nova_compute[226433]: 2026-01-22 14:04:30.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:30 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:30 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:30.627+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:31 compute-2 nova_compute[226433]: 2026-01-22 14:04:31.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:31 compute-2 nova_compute[226433]: 2026-01-22 14:04:31.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:31 compute-2 nova_compute[226433]: 2026-01-22 14:04:31.551 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:04:31 compute-2 nova_compute[226433]: 2026-01-22 14:04:31.551 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:04:31 compute-2 nova_compute[226433]: 2026-01-22 14:04:31.551 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:04:31 compute-2 nova_compute[226433]: 2026-01-22 14:04:31.552 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:04:31 compute-2 nova_compute[226433]: 2026-01-22 14:04:31.552 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:04:31 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:31 compute-2 ceph-mon[77081]: pgmap v1094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:31 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:31.626+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:04:31 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4110622290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.007 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:04:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:32.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.174 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.175 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5205MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.175 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.175 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.271 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.272 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.273 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:04:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.314 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:04:32 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:32 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4110622290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:32.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:04:32 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1738843088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.728 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.733 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.768 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.770 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:04:32 compute-2 nova_compute[226433]: 2026-01-22 14:04:32.770 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:04:33 compute-2 ceph-mon[77081]: pgmap v1095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:33 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1738843088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:33.702+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.767 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.767 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.768 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.768 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.788 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.789 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.789 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.789 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.790 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:04:33 compute-2 nova_compute[226433]: 2026-01-22 14:04:33.790 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:04:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:34.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:34.698+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:34 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/835375088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2981244524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:04:34 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:35.707+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:35 compute-2 ceph-mon[77081]: pgmap v1096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:35 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:36.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:36.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:36.670+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:36 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:37.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:38.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:38 compute-2 ceph-mon[77081]: pgmap v1097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:38 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:38.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:38.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:38 compute-2 podman[231803]: 2026-01-22 14:04:38.993378391 +0000 UTC m=+0.053998678 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:04:39 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:39 compute-2 ceph-mon[77081]: pgmap v1098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:39.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:40.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:40 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:40 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1669 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:40.624+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:41 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:41 compute-2 ceph-mon[77081]: pgmap v1099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:41.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:42 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:42 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:42.553+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:43 compute-2 ceph-mon[77081]: pgmap v1100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:43 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:43.551+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:44.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:44.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:44 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:44.585+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:45 compute-2 sudo[231825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:45 compute-2 sudo[231825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:45 compute-2 sudo[231825]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:45 compute-2 sudo[231850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:45 compute-2 sudo[231850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:45 compute-2 sudo[231850]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:45.589+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:45 compute-2 ceph-mon[77081]: pgmap v1101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:45 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:45 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:46.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:46.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:46.568+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:04:47.173 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:04:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:04:47.174 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:04:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:04:47.174 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:04:47 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:47 compute-2 ceph-mon[77081]: pgmap v1102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:47.570+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:04:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:48.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:04:48 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:48.604+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:49.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:49 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:49 compute-2 ceph-mon[77081]: pgmap v1103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:49 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:50.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:50.636+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:51 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:51 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:51.684+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:52 compute-2 ceph-mon[77081]: pgmap v1104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:52 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:52.666+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:53 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:53 compute-2 ceph-mon[77081]: pgmap v1105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:53.656+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:54 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:54.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:54.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:55 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:55 compute-2 ceph-mon[77081]: pgmap v1106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:55 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:04:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:55.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:56 compute-2 podman[231880]: 2026-01-22 14:04:56.049564977 +0000 UTC m=+0.112166115 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 14:04:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:56 compute-2 sudo[231906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:56 compute-2 sudo[231906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-2 sudo[231906]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-2 sudo[231931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:04:56 compute-2 sudo[231931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-2 sudo[231931]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:56 compute-2 sudo[231956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:56 compute-2 sudo[231956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-2 sudo[231956]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-2 sudo[231981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:04:56 compute-2 sudo[231981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:56.622+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:56 compute-2 sudo[231981]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:56 compute-2 sudo[232038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:56 compute-2 sudo[232038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:56 compute-2 sudo[232038]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:57 compute-2 sudo[232063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:04:57 compute-2 sudo[232063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:57 compute-2 sudo[232063]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:57 compute-2 sudo[232088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:04:57 compute-2 sudo[232088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:57 compute-2 sudo[232088]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:57 compute-2 sudo[232113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 14:04:57 compute-2 sudo[232113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:04:57 compute-2 sudo[232113]: pam_unix(sudo:session): session closed for user root
Jan 22 14:04:57 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:57 compute-2 ceph-mon[77081]: pgmap v1107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:57 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:57.611+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:04:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:58.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:04:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:04:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:04:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:58.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:04:58 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:58.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:04:59 compute-2 ceph-mon[77081]: pgmap v1108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:04:59 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:04:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:04:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:59.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:04:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:00.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:00.557+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:00 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:00 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1689 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:01.559+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:01 compute-2 ceph-mon[77081]: pgmap v1109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:01 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:05:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:05:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:02.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:02.596+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:02 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:03 compute-2 ceph-mon[77081]: pgmap v1110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:03 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:03.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:04.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:04.640+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:04 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:04 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:05 compute-2 sudo[232161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:05 compute-2 sudo[232161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:05 compute-2 sudo[232161]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:05 compute-2 sudo[232186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:05 compute-2 sudo[232186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:05 compute-2 sudo[232186]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:05.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:05 compute-2 ceph-mon[77081]: pgmap v1111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:05 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:06.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:06.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:06 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:07.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:07 compute-2 ceph-mon[77081]: pgmap v1112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:07 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:05:07 compute-2 sudo[232212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:07 compute-2 sudo[232212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:07 compute-2 sudo[232212]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:07 compute-2 sudo[232237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:05:07 compute-2 sudo[232237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:07 compute-2 sudo[232237]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:08.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:08.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:08.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:09.659+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:10 compute-2 podman[232263]: 2026-01-22 14:05:10.008917916 +0000 UTC m=+0.067481763 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Jan 22 14:05:10 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:10.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:10.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:10.667+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:11 compute-2 ceph-mon[77081]: pgmap v1113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:11 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:11 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:11 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:11 compute-2 ceph-mon[77081]: pgmap v1114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:11.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:12 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:12.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:12.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:13 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:13 compute-2 ceph-mon[77081]: pgmap v1115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:13.678+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:14.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:14 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:14 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:14.721+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:15.756+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:15 compute-2 ceph-mon[77081]: pgmap v1116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:15 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:15 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:16.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:16.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:16 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:16.735+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:17.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:17 compute-2 ceph-mon[77081]: pgmap v1117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:17 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:18.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:18.781+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:19 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/508443213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:05:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/508443213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:05:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:19.784+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:20 compute-2 ceph-mon[77081]: pgmap v1118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:20 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:20 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:20.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:20.794+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:21 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:21 compute-2 ceph-mon[77081]: pgmap v1119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:21.747+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:22.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:22.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:22 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:23 compute-2 sshd-session[232290]: Invalid user solv from 45.148.10.240 port 57984
Jan 22 14:05:23 compute-2 sshd-session[232290]: Connection closed by invalid user solv 45.148.10.240 port 57984 [preauth]
Jan 22 14:05:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:23.712+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:23 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:23 compute-2 ceph-mon[77081]: pgmap v1120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:23 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:24.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:24.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:25 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:25 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:25 compute-2 sudo[232293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:25 compute-2 sudo[232293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:25 compute-2 sudo[232293]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:25 compute-2 sudo[232318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:25 compute-2 sudo[232318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:25 compute-2 sudo[232318]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:25.748+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:26 compute-2 ceph-mon[77081]: pgmap v1121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:26 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:26.710+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:27 compute-2 podman[232344]: 2026-01-22 14:05:27.048445139 +0000 UTC m=+0.109298695 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
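
This podman event is the periodic healthcheck for ovn_controller reporting healthy with a zero failing streak; per the embedded config_data, the check is the /openstack/healthcheck script mounted into the container. To re-run the same check by hand, a sketch using the stock podman subcommand (container name taken from the event):

    import subprocess

    # Re-run ovn_controller's configured healthcheck on demand; exit
    # code 0 corresponds to health_status=healthy in the event above.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")
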
Jan 22 14:05:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:27 compute-2 ceph-mon[77081]: pgmap v1122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:27.689+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:28 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:28.691+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:05:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5573 writes, 31K keys, 5573 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 5573 writes, 5573 syncs, 1.00 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1862 writes, 9404 keys, 1862 commit groups, 1.0 writes per commit group, ingest: 16.86 MB, 0.03 MB/s
                                           Interval WAL: 1862 writes, 1862 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     86.3      0.38              0.10        16    0.024       0      0       0.0       0.0
                                             L6      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8    130.7    109.5      1.15              0.35        15    0.077     86K   7953       0.0       0.0
                                            Sum      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.8     98.2    103.7      1.54              0.45        31    0.050     86K   7953       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6    115.0    116.2      0.45              0.18        10    0.045     33K   2588       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    130.7    109.5      1.15              0.35        15    0.077     86K   7953       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     87.1      0.38              0.10        15    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.16 GB write, 0.09 MB/s write, 0.15 GB read, 0.08 MB/s read, 1.5 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 14.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000127 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(752,13.68 MB,4.50034%) FilterBlock(31,253.67 KB,0.081489%) IndexBlock(31,393.67 KB,0.126462%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
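
The stats dump itself is unremarkable: zero stalls, ~0.03 MB/s of WAL ingest, and a 304.00 MB block cache holding only 14.31 MB. That 304 MiB equals the kv_alloc of 318767104 bytes in the mon's _set_new_cache_sizes lines, so the block cache is being sized by the mon's cache autotuner; and the Sum row's W-Amp of 4.8 lines up with roughly 0.16 GB of compaction writes against the ~0.03 GB flushed. Pulling the headline counters out of such a dump programmatically, a sketch keyed to the exact labels above:

    import re

    def rocksdb_headline(dump: str) -> dict:
        """Pull the cumulative write/WAL/stall lines out of a
        '------- DUMPING STATS -------' block as seen in the mon log."""
        out = {}
        for key in ("Cumulative writes", "Cumulative WAL", "Cumulative stall"):
            m = re.search(rf"^\s*{key}: (.+)$", dump, re.MULTILINE)
            if m:
                out[key] = m.group(1)
        return out

    dump = """\
    Cumulative writes: 5573 writes, 31K keys, 5573 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
    Cumulative WAL: 5573 writes, 5573 syncs, 1.00 writes per sync, written: 0.06 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    """
    print(rocksdb_headline(dump))
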
Jan 22 14:05:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:29 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:29 compute-2 ceph-mon[77081]: pgmap v1123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:29.707+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:30.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:30 compute-2 nova_compute[226433]: 2026-01-22 14:05:30.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:30 compute-2 nova_compute[226433]: 2026-01-22 14:05:30.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:30.700+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:31 compute-2 nova_compute[226433]: 2026-01-22 14:05:31.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:31 compute-2 nova_compute[226433]: 2026-01-22 14:05:31.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:31.722+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:31 compute-2 nova_compute[226433]: 2026-01-22 14:05:31.799 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:05:31 compute-2 nova_compute[226433]: 2026-01-22 14:05:31.799 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:05:31 compute-2 nova_compute[226433]: 2026-01-22 14:05:31.800 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:05:31 compute-2 nova_compute[226433]: 2026-01-22 14:05:31.800 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:05:31 compute-2 nova_compute[226433]: 2026-01-22 14:05:31.800 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
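
The resource audit shells out to ceph df to size the RBD-backed disk inventory; the mon's audit channel shows the matching {"prefix": "df"} dispatches from client.openstack at 14:05:33-34 below. The same call standalone, a sketch; the JSON field names ('stats', 'total_avail_bytes', per-pool 'bytes_used') are assumptions from common ceph df output, not read from this log:

    import json
    import subprocess

    # Same invocation nova-compute uses for its periodic resource audit.
    raw = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    df = json.loads(raw)
    # Assumed layout: a global 'stats' section plus a per-pool list.
    stats = df.get("stats", {})
    print("avail GiB:", stats.get("total_avail_bytes", 0) / 2**30)
    for pool in df.get("pools", []):
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))
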
Jan 22 14:05:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:32 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:32 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:32 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:32.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:32.702+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:32 compute-2 ceph-mon[77081]: pgmap v1124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:05:33 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/330811639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.326 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.471 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.472 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5224MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
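
The hypervisor resource view embeds the node's PCI inventory as JSON: five Intel (8086) chipset functions and six virtio (1af4) devices, the standard topology of a KVM guest, with numa_node null throughout. Tallying by vendor, a sketch over two entries copied from the line above:

    import collections
    import json

    # Two entries copied from the pci_devices JSON in the log line above;
    # the real list has 11 devices (5 x 8086, 6 x 1af4).
    pci_devices = json.loads("""[
      {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0",
       "product_id": "7000", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7000", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
       "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1000", "dev_type": "type-PCI"}
    ]""")
    by_vendor = collections.Counter(d["vendor_id"] for d in pci_devices)
    print(by_vendor)  # Counter({'8086': 1, '1af4': 1})
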
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.472 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.472 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:05:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:33.682+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.832 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.832 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:05:33 compute-2 nova_compute[226433]: 2026-01-22 14:05:33.832 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:05:33 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:33 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:33 compute-2 ceph-mon[77081]: pgmap v1125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:33 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/330811639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:34 compute-2 nova_compute[226433]: 2026-01-22 14:05:34.056 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:05:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:34.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:34.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:05:34 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3068488076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:34 compute-2 nova_compute[226433]: 2026-01-22 14:05:34.471 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:05:34 compute-2 nova_compute[226433]: 2026-01-22 14:05:34.477 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:05:34 compute-2 nova_compute[226433]: 2026-01-22 14:05:34.496 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
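
The inventory record carries total, reserved, and allocation_ratio per resource class. Taking placement's capacity rule as (total - reserved) * allocation_ratio (an assumption about the service's semantics, not something stated in this log), this node offers 7167 MB of schedulable RAM, 32 VCPUs, and 17.1 GB of disk. Worked out:

    # Inventory as reported for provider d4dcb68c-... above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }

    # Assumed placement semantics: capacity = (total - reserved) * ratio.
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 17.1
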
Jan 22 14:05:34 compute-2 nova_compute[226433]: 2026-01-22 14:05:34.498 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:05:34 compute-2 nova_compute[226433]: 2026-01-22 14:05:34.498 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:05:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:34.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:35 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1659360568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:35 compute-2 ceph-mon[77081]: pgmap v1126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3068488076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:35 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.497 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.498 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.498 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.498 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.618 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.618 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.619 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.619 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.619 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.620 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.620 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.620 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.646 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:05:35 compute-2 nova_compute[226433]: 2026-01-22 14:05:35.646 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
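
The run of "Running periodic task ComputeManager._*" lines is one sweep of nova-compute's periodic loop; each task is a method registered with oslo.service's decorator and dispatched by run_periodic_tasks, the function named at the end of every line. A minimal sketch of that registration pattern (manager class and task body invented for illustration):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # Registered much like ComputeManager._poll_volume_usage;
        # spacing is the interval in seconds.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _demo_task(self, context):
            print("periodic tick")

    mgr = Manager()
    # The service loop normally drives this; run_periodic_tasks is what
    # logs the "Running periodic task ..." DEBUG lines seen above.
    mgr.run_periodic_tasks(context=None)
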
Jan 22 14:05:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:35.692+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:36.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:36.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:36 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1439210580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:05:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:36.645+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:37 compute-2 nova_compute[226433]: 2026-01-22 14:05:37.526 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:37 compute-2 nova_compute[226433]: 2026-01-22 14:05:37.527 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:05:37 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:37 compute-2 ceph-mon[77081]: pgmap v1127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:37 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:37.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:05:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:38.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:05:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:38.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:38 compute-2 nova_compute[226433]: 2026-01-22 14:05:38.530 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:38.686+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:38 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:39.687+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:39 compute-2 ceph-mon[77081]: pgmap v1128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:39 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:39 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:40.717+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:40 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:40 compute-2 podman[232422]: 2026-01-22 14:05:40.997405113 +0000 UTC m=+0.057885378 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 14:05:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:41.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:41 compute-2 ceph-mon[77081]: pgmap v1129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:41 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:42.664+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:43 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:43 compute-2 ceph-mon[77081]: pgmap v1130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:43.688+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:44.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:44 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:44.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:44.662+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:45 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:45 compute-2 ceph-mon[77081]: pgmap v1131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:45 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:45.687+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:45 compute-2 sudo[232444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:45 compute-2 sudo[232444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:45 compute-2 sudo[232444]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:45 compute-2 sudo[232469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:05:45 compute-2 sudo[232469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:05:45 compute-2 sudo[232469]: pam_unix(sudo:session): session closed for user root
Jan 22 14:05:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:46.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:46 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:46.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:46.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:05:47.174 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:05:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:05:47.175 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:05:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:05:47.175 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:05:47 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:47 compute-2 ceph-mon[77081]: pgmap v1132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:47.745+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:48.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:48 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:48 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:48.697+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:49 compute-2 ceph-mon[77081]: pgmap v1133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:49 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:49.714+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:50 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:50 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:50.715+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:50 compute-2 nova_compute[226433]: 2026-01-22 14:05:50.937 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:05:51 compute-2 nova_compute[226433]: 2026-01-22 14:05:51.022 226437 WARNING nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 22 14:05:51 compute-2 nova_compute[226433]: 2026-01-22 14:05:51.022 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid e0e74330-96df-479f-8baf-53fbd2ccba91 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 22 14:05:51 compute-2 nova_compute[226433]: 2026-01-22 14:05:51.023 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "e0e74330-96df-479f-8baf-53fbd2ccba91" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:05:51 compute-2 ceph-mon[77081]: pgmap v1134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:51 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:51.697+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:52 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:52.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:53 compute-2 ceph-mon[77081]: pgmap v1135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:53 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:53.692+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:05:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:05:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:54.673+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:54 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:54 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:05:55 compute-2 nova_compute[226433]: 2026-01-22 14:05:55.561 226437 DEBUG oslo_concurrency.lockutils [None req-4800287f-e66f-4013-8cd3-d4db81524aa2 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Acquiring lock "e0e74330-96df-479f-8baf-53fbd2ccba91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:05:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:55.707+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:55 compute-2 ceph-mon[77081]: pgmap v1136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:55 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:56.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:57.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:05:57 compute-2 ceph-mon[77081]: pgmap v1137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:57 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:05:58 compute-2 podman[232500]: 2026-01-22 14:05:58.007206191 +0000 UTC m=+0.069740831 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:05:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:05:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:05:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:05:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:58.802+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:05:58 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:05:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:05:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:59.789+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:05:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:05:59 compute-2 ceph-mon[77081]: pgmap v1138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:05:59 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:05:59 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:00.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:00.744+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:00 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:00 compute-2 ceph-mon[77081]: pgmap v1139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 14:06:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:01.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:01 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:02.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:02.681+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:03 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:03 compute-2 ceph-mon[77081]: pgmap v1140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:03.705+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:04.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:04.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:04 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:04 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:05.694+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:05 compute-2 ceph-mon[77081]: pgmap v1141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:05 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:05 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.830927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765831067, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2725, "num_deletes": 506, "total_data_size": 5080915, "memory_usage": 5160128, "flush_reason": "Manual Compaction"}
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765864493, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3276667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29161, "largest_seqno": 31881, "table_properties": {"data_size": 3266581, "index_size": 5556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 27698, "raw_average_key_size": 20, "raw_value_size": 3242643, "raw_average_value_size": 2389, "num_data_blocks": 243, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090600, "oldest_key_time": 1769090600, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 33620 microseconds, and 11595 cpu microseconds.
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.864569) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3276667 bytes OK
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.864605) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.872539) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.872566) EVENT_LOG_v1 {"time_micros": 1769090765872560, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.872595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5067658, prev total WAL file size 5067658, number of live WAL files 2.
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.873998) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3199KB)], [57(8703KB)]
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765874080, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 12189317, "oldest_snapshot_seqno": -1}
Jan 22 14:06:05 compute-2 sudo[232530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:05 compute-2 sudo[232530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:05 compute-2 sudo[232530]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6971 keys, 10360264 bytes, temperature: kUnknown
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765956477, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 10360264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10315908, "index_size": 25812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183819, "raw_average_key_size": 26, "raw_value_size": 10190725, "raw_average_value_size": 1461, "num_data_blocks": 1022, "num_entries": 6971, "num_filter_entries": 6971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.956936) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10360264 bytes
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.960011) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.8 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8001, records dropped: 1030 output_compression: NoCompression
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.960035) EVENT_LOG_v1 {"time_micros": 1769090765960022, "job": 34, "event": "compaction_finished", "compaction_time_micros": 82457, "compaction_time_cpu_micros": 24422, "output_level": 6, "num_output_files": 1, "total_output_size": 10360264, "num_input_records": 8001, "num_output_records": 6971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765960920, "job": 34, "event": "table_file_deletion", "file_number": 59}
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765963007, "job": 34, "event": "table_file_deletion", "file_number": 57}
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.873859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:05 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:06:06 compute-2 sudo[232555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:06 compute-2 sudo[232555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:06 compute-2 sudo[232555]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:06.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:06.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:06.731+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:06 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:07.713+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:07 compute-2 ceph-mon[77081]: pgmap v1142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:07 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:08 compute-2 sudo[232581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:08 compute-2 sudo[232581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-2 sudo[232581]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-2 sudo[232606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:06:08 compute-2 sudo[232606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-2 sudo[232606]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-2 sudo[232631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:08 compute-2 sudo[232631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-2 sudo[232631]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:08 compute-2 sudo[232656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:06:08 compute-2 sudo[232656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:08 compute-2 sudo[232656]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:08.728+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:08 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:09.776+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:10 compute-2 ceph-mon[77081]: pgmap v1143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:10 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:06:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:06:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:06:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:06:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:06:10 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1759 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:10.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:11 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:11 compute-2 ceph-mon[77081]: pgmap v1144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:06:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:11.691+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:12 compute-2 podman[232713]: 2026-01-22 14:06:12.018236321 +0000 UTC m=+0.085136577 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
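The podman health_status events above come from the healthcheck configured on the container ('test': '/openstack/healthcheck', mounted at /openstack). The same check can be driven on demand; a sketch, assuming podman is on PATH and the caller has rights over the container:

    import subprocess

    def container_healthy(name):
        """Run a container's configured healthcheck; exit code 0 means healthy."""
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        return r.returncode == 0

    # e.g. container_healthy("ovn_metadata_agent")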
Jan 22 14:06:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:12 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:12.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:13.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:14.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:14 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:14 compute-2 ceph-mon[77081]: pgmap v1145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 14:06:14 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:14.764+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:15 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:15 compute-2 ceph-mon[77081]: pgmap v1146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:15 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:15.740+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:16.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:16 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:16.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:16.765+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:16 compute-2 sudo[232735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:16 compute-2 sudo[232735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:16 compute-2 sudo[232735]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:16 compute-2 sudo[232760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:06:16 compute-2 sudo[232760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:16 compute-2 sudo[232760]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:17.727+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:17 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:06:17 compute-2 ceph-mon[77081]: pgmap v1147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:18.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:18.679+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:18 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:18 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:06:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2515016526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:06:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:19.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:20 compute-2 ceph-mon[77081]: pgmap v1148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:20 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:20 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:20.652+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:21 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:21 compute-2 ceph-mon[77081]: pgmap v1149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:21.624+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:22.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:22 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:22 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:23.556+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:24 compute-2 ceph-mon[77081]: pgmap v1150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:24 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:24.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:24.545+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:25 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:25 compute-2 ceph-mon[77081]: pgmap v1151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:25 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:25.525+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:26 compute-2 sudo[232789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:26 compute-2 sudo[232789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:26 compute-2 sudo[232789]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:26 compute-2 sudo[232814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:26 compute-2 sudo[232814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:26 compute-2 sudo[232814]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:26 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:26.569+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:27.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:27 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:27 compute-2 ceph-mon[77081]: pgmap v1152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:28.610+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:06:28 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:29 compute-2 podman[232841]: 2026-01-22 14:06:29.004002492 +0000 UTC m=+0.071161917 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:06:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:29.586+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:30 compute-2 ceph-mon[77081]: pgmap v1153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:30 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1779 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:30.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:30 compute-2 nova_compute[226433]: 2026-01-22 14:06:30.602 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:30.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:06:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.5 total, 600.0 interval
                                           Cumulative writes: 5911 writes, 24K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5911 writes, 1112 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 722 writes, 1627 keys, 722 commit groups, 1.0 writes per commit group, ingest: 1.08 MB, 0.00 MB/s
                                           Interval WAL: 722 writes, 316 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
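The derived columns in the DB Stats dump are plain ratios of the raw counters, so the figures can be cross-checked directly:

    # Cross-checking the "writes per sync" figures in the dump above.
    cum_writes, cum_syncs = 5911, 1112     # Cumulative WAL line
    int_writes, int_syncs = 722, 316       # Interval WAL line
    assert round(cum_writes / cum_syncs, 2) == 5.32
    assert round(int_writes / int_syncs, 2) == 2.28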
Jan 22 14:06:31 compute-2 nova_compute[226433]: 2026-01-22 14:06:31.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:31 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:31 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:31 compute-2 ceph-mon[77081]: pgmap v1154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:31.634+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:32.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:32 compute-2 nova_compute[226433]: 2026-01-22 14:06:32.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:32.590+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:32 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:32 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:33 compute-2 nova_compute[226433]: 2026-01-22 14:06:33.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:33 compute-2 nova_compute[226433]: 2026-01-22 14:06:33.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:33.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:34.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:34 compute-2 nova_compute[226433]: 2026-01-22 14:06:34.466 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:06:34 compute-2 nova_compute[226433]: 2026-01-22 14:06:34.467 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:06:34 compute-2 nova_compute[226433]: 2026-01-22 14:06:34.467 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:06:34 compute-2 nova_compute[226433]: 2026-01-22 14:06:34.468 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:06:34 compute-2 nova_compute[226433]: 2026-01-22 14:06:34.468 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
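The resource tracker shells out to the exact command in the previous line rather than using a librados binding. A sketch reproducing the call and pulling the cluster totals out of the JSON (the client id and conf path are taken from the log line itself; the field names reflect recent Ceph releases and are an assumption):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                   text=True).stdout)
    totals = df["stats"]   # e.g. total_bytes / total_avail_bytes in recent Ceph
    avail_gib = totals["total_avail_bytes"] / 2**30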
Jan 22 14:06:34 compute-2 ceph-mon[77081]: pgmap v1155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:34 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:34.611+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:06:35 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/539268978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.204 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.389 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.391 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5188MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.391 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.391 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:06:35 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:35 compute-2 ceph-mon[77081]: pgmap v1156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:35 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3190855873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/539268978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.547 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.548 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.548 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.568 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.603 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.604 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
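Placement turns an inventory like the one just logged into schedulable capacity as (total - reserved) * allocation_ratio, which is how 8 physical vCPUs back the 32-vCPU ceiling implied by the 4.0 ratio. Checking the logged values:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    cap = {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
           for rc, v in inv.items()}
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~17.1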
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.622 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:06:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:35.630+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.662 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:06:35 compute-2 nova_compute[226433]: 2026-01-22 14:06:35.708 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:06:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:36.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:06:36 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1569454389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:36 compute-2 nova_compute[226433]: 2026-01-22 14:06:36.436 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:06:36 compute-2 nova_compute[226433]: 2026-01-22 14:06:36.446 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:06:36 compute-2 nova_compute[226433]: 2026-01-22 14:06:36.471 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:06:36 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/478780440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1569454389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:06:36 compute-2 nova_compute[226433]: 2026-01-22 14:06:36.633 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:06:36 compute-2 nova_compute[226433]: 2026-01-22 14:06:36.633 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:06:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:36.667+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:37.620+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.633 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.633 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.634 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.634 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.675 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.675 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.676 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.676 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:06:37 compute-2 nova_compute[226433]: 2026-01-22 14:06:37.676 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:06:37 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:37 compute-2 ceph-mon[77081]: pgmap v1157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:37 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:38.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:38 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:39.609+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:39 compute-2 ceph-mon[77081]: pgmap v1158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:39 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:39 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:40 compute-2 sshd-session[232916]: Invalid user jito from 92.118.39.95 port 60816
Jan 22 14:06:40 compute-2 sshd-session[232916]: Connection closed by invalid user jito 92.118.39.95 port 60816 [preauth]
Jan 22 14:06:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:40.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:40.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:40.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:40 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:41.633+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:42 compute-2 ceph-mon[77081]: pgmap v1159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:42 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:42.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:42.648+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:42 compute-2 podman[232920]: 2026-01-22 14:06:42.991244584 +0000 UTC m=+0.052336462 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 14:06:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:43.698+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:43 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:43 compute-2 ceph-mon[77081]: pgmap v1160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:44.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:44.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:44.720+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:45 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:45 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:45 compute-2 ceph-mon[77081]: pgmap v1161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:45 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:45.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:46 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:46 compute-2 sudo[232940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:46 compute-2 sudo[232940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:46 compute-2 sudo[232940]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:46 compute-2 sudo[232965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:06:46 compute-2 sudo[232965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:06:46 compute-2 sudo[232965]: pam_unix(sudo:session): session closed for user root
Jan 22 14:06:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 14:06:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 14:06:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:46.729+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:06:47.175 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:06:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:06:47.176 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:06:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:06:47.176 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:06:47 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:06:47 compute-2 ceph-mon[77081]: pgmap v1162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:47.774+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:48.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:48 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:48 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:48.776+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:49.776+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:50 compute-2 ceph-mon[77081]: pgmap v1163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:50 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:06:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:50.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:06:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:50.781+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:50 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:50 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:51.811+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:06:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:52.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:06:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:52 compute-2 ceph-mon[77081]: pgmap v1164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:52 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:52.778+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:53 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:53 compute-2 ceph-mon[77081]: pgmap v1165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:53 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:53.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:54.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:54.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:54.734+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:54 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:54 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:06:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:06:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:55.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:55 compute-2 ceph-mon[77081]: pgmap v1166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:55 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:56.762+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:56 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:57.811+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:58 compute-2 ceph-mon[77081]: pgmap v1167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:06:58 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:58 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 22 14:06:58 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 22 14:06:58 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 14:06:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:06:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:06:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:58.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:06:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:58.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:06:59 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:06:59 compute-2 ceph-mon[77081]: pgmap v1168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 22 14:06:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:59.815+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:06:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:00 compute-2 podman[232997]: 2026-01-22 14:07:00.078035448 +0000 UTC m=+0.128524561 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:07:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:00 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:07:00 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:00.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:07:01 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:01 compute-2 ceph-mon[77081]: pgmap v1169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 14:07:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:01.867+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:02 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:07:02 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:02.887+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:03 compute-2 ceph-mon[77081]: pgmap v1170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 22 14:07:03 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:03.900+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:04 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:04 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:04.938+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:05 compute-2 ceph-mon[77081]: pgmap v1171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 0 B/s wr, 87 op/s
Jan 22 14:07:05 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:05.908+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:06.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:06 compute-2 sudo[233027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:06 compute-2 sudo[233027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:06 compute-2 sudo[233027]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:06 compute-2 sudo[233052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:06 compute-2 sudo[233052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:06 compute-2 sudo[233052]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:06.941+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:07 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:07.925+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:08 compute-2 ceph-mon[77081]: pgmap v1172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:07:08 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:08.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:08.879+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:09 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:09 compute-2 ceph-mon[77081]: pgmap v1173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:07:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:09.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:10.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:10 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:10 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:10.855+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:11 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:11 compute-2 ceph-mon[77081]: pgmap v1174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Jan 22 14:07:11 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:11.815+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 14:07:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 14:07:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:12.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:12 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:12.789+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:13 compute-2 ceph-mon[77081]: pgmap v1175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Jan 22 14:07:13 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:13.826+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:14 compute-2 podman[233081]: 2026-01-22 14:07:14.017736067 +0000 UTC m=+0.075008990 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:07:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:14.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:07:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:14.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:07:14 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:14.813+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:15 compute-2 ceph-mon[77081]: pgmap v1176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Jan 22 14:07:15 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:15 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:15.803+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:16.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:16 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:07:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:16.822+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:16 compute-2 sudo[233102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:16 compute-2 sudo[233102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:16 compute-2 sudo[233102]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-2 sudo[233127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:07:17 compute-2 sudo[233127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:17 compute-2 sudo[233127]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-2 sudo[233152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:17 compute-2 sudo[233152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:17 compute-2 sudo[233152]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-2 sudo[233177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:07:17 compute-2 sudo[233177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:17 compute-2 sudo[233177]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:17.840+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:18 compute-2 ceph-mon[77081]: pgmap v1177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 14:07:18 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:18 compute-2 sudo[233223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:18 compute-2 sudo[233223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:18 compute-2 sudo[233223]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:18.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:18 compute-2 sudo[233248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:07:18 compute-2 sudo[233248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:18 compute-2 sudo[233248]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:18 compute-2 sudo[233273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:18 compute-2 sudo[233273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:18 compute-2 sudo[233273]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:18 compute-2 sudo[233298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:07:18 compute-2 sudo[233298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:18.799+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:07:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:07:19 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:19 compute-2 ceph-mon[77081]: pgmap v1178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:19 compute-2 podman[233398]: 2026-01-22 14:07:19.336995834 +0000 UTC m=+0.057293003 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:07:19 compute-2 podman[233398]: 2026-01-22 14:07:19.429668578 +0000 UTC m=+0.149965727 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 14:07:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:19.750+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:20 compute-2 podman[233551]: 2026-01-22 14:07:20.08781249 +0000 UTC m=+0.056715717 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:07:20 compute-2 podman[233551]: 2026-01-22 14:07:20.104742366 +0000 UTC m=+0.073645563 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:07:20 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:20 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:20.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:20 compute-2 podman[233619]: 2026-01-22 14:07:20.339454878 +0000 UTC m=+0.049524527 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Jan 22 14:07:20 compute-2 podman[233619]: 2026-01-22 14:07:20.354577157 +0000 UTC m=+0.064646816 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64)
Jan 22 14:07:20 compute-2 sudo[233298]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:20.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:20 compute-2 sudo[233649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:20 compute-2 sudo[233649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:20 compute-2 sudo[233649]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:20 compute-2 sudo[233674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:07:20 compute-2 sudo[233674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:20 compute-2 sudo[233674]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:20 compute-2 sudo[233700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:20 compute-2 sudo[233700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:20 compute-2 sudo[233700]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:20.711+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:20 compute-2 sudo[233725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:07:20 compute-2 sudo[233725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:21 compute-2 sudo[233725]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:21 compute-2 ceph-mon[77081]: pgmap v1179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:07:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:07:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:21.733+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:22.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:22 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:22.717+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:23 compute-2 ceph-mon[77081]: pgmap v1180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:23.734+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:24.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:24.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:24.762+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:25 compute-2 ceph-mon[77081]: pgmap v1181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:25 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:25 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:25.736+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:26.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:26.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:26 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:26 compute-2 sudo[233784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:26 compute-2 sudo[233784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:26 compute-2 sudo[233784]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:26 compute-2 sudo[233809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:26 compute-2 sudo[233809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:26.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:26 compute-2 sudo[233809]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:27 compute-2 ceph-mon[77081]: pgmap v1182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:27.703+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:28.663+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:29 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:07:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:29.669+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:29 compute-2 sudo[233835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:29 compute-2 sudo[233835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:29 compute-2 sudo[233835]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:29 compute-2 sudo[233860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:07:29 compute-2 sudo[233860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:29 compute-2 sudo[233860]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:30.680+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:30 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:30 compute-2 ceph-mon[77081]: pgmap v1183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:30 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:31 compute-2 podman[233886]: 2026-01-22 14:07:31.016607021 +0000 UTC m=+0.080172499 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 14:07:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:31 compute-2 nova_compute[226433]: 2026-01-22 14:07:31.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:31.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:32 compute-2 nova_compute[226433]: 2026-01-22 14:07:32.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:32.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:32 compute-2 ceph-mon[77081]: pgmap v1184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:32 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:33.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:33 compute-2 nova_compute[226433]: 2026-01-22 14:07:33.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:33 compute-2 nova_compute[226433]: 2026-01-22 14:07:33.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:33.777+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/113320093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:34.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:34.818+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:34 compute-2 ceph-mon[77081]: pgmap v1185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:34 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1843 sec, osd.2 has slow ops (SLOW_OPS)
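[editor's note] The mon's SLOW_OPS updates in this section count the oldest blocked op at 1843, 1848, 1853, 1858, 1863, then 1868 sec — advancing in ~5 s steps with each health tick, which puts the op stuck on osd.2 since roughly 13:37 (about 30.7 minutes before this line). A small sketch for pulling those numbers out of captured journal text:

    # Hedged sketch: extract the blocked-time counter from SLOW_OPS health
    # lines like the one above. Operates on text already captured from the log.
    import re

    pattern = re.compile(
        r"(\d+) slow ops, oldest one blocked for (\d+) sec, (osd\.\d+) has slow ops"
    )

    line = ("Health check update: 12 slow ops, oldest one blocked for 1843 sec, "
            "osd.2 has slow ops (SLOW_OPS)")
    m = pattern.search(line)
    if m:
        ops, blocked, osd = int(m.group(1)), int(m.group(2)), m.group(3)
        print(f"{osd}: {ops} slow ops, oldest blocked {blocked / 60:.1f} min")
        # -> osd.2: 12 slow ops, oldest blocked 30.7 min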
Jan 22 14:07:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/438682208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:35 compute-2 nova_compute[226433]: 2026-01-22 14:07:35.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:35 compute-2 nova_compute[226433]: 2026-01-22 14:07:35.614 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:07:35 compute-2 nova_compute[226433]: 2026-01-22 14:07:35.615 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:07:35 compute-2 nova_compute[226433]: 2026-01-22 14:07:35.615 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:07:35 compute-2 nova_compute[226433]: 2026-01-22 14:07:35.616 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:07:35 compute-2 nova_compute[226433]: 2026-01-22 14:07:35.616 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:07:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:35.846+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:35 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:35 compute-2 ceph-mon[77081]: pgmap v1186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:07:36 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3109525077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.088 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
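[editor's note] During update_available_resource, nova shells out to the exact command logged above (returning 0 in 0.472 s here, despite the cluster's slow ops — `ceph df` is served by the mon, not the laggy OSD). A sketch replicating that call; the JSON key names follow the usual `ceph df --format=json` layout but can vary by Ceph release, so treat them as assumptions.

    # Hedged sketch: run the same command nova logs above and read the totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    df = json.loads(out)
    stats = df["stats"]  # assumed key layout; verify against your Ceph release
    print(stats["total_bytes"], stats["total_avail_bytes"])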
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.279 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.281 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5200MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.281 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.281 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.391 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.460 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:07:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:36.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:36.827+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:07:36 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/237134865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.937 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.943 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.960 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
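[editor's note] The inventory above reconciles with the final resource view a few lines earlier: used_ram=640MB is the 512 MB reservation plus the single instance's 128 MB allocation, and used_disk=1GB matches its DISK_GB:1. Applying the standard placement capacity rule, capacity = (total - reserved) * allocation_ratio — stated here as an assumption about this cloud's scheduler behavior — gives the schedulable headroom:

    # Hedged sketch: placement-style capacity from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 17.1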
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.961 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:07:36 compute-2 nova_compute[226433]: 2026-01-22 14:07:36.962 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:07:37 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:37 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3109525077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:37 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/237134865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:07:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:37.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:37.783+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:37 compute-2 nova_compute[226433]: 2026-01-22 14:07:37.962 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:37 compute-2 nova_compute[226433]: 2026-01-22 14:07:37.963 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:37 compute-2 nova_compute[226433]: 2026-01-22 14:07:37.964 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:07:37 compute-2 nova_compute[226433]: 2026-01-22 14:07:37.964 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:07:37 compute-2 nova_compute[226433]: 2026-01-22 14:07:37.999 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:07:38 compute-2 nova_compute[226433]: 2026-01-22 14:07:37.999 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:07:38 compute-2 nova_compute[226433]: 2026-01-22 14:07:38.000 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:38 compute-2 nova_compute[226433]: 2026-01-22 14:07:38.000 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:38 compute-2 nova_compute[226433]: 2026-01-22 14:07:38.000 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:07:38 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:38 compute-2 ceph-mon[77081]: pgmap v1187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:38.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:38.799+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:39.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:39 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:39 compute-2 nova_compute[226433]: 2026-01-22 14:07:39.549 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:07:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:39.815+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:40.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:40 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:40 compute-2 ceph-mon[77081]: pgmap v1188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:40 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:40 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:40.854+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:41.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:41 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:41.886+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:41 compute-2 sshd-session[233962]: Invalid user validator from 45.148.10.240 port 59254
Jan 22 14:07:42 compute-2 sshd-session[233962]: Connection closed by invalid user validator 45.148.10.240 port 59254 [preauth]
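[editor's note] The two sshd lines above are an unauthenticated probe for a nonexistent "validator" account from 45.148.10.240 — a routine brute-force scan, closed preauth. A sketch for tallying such probes per source address from a capture of this journal; the capture filename is hypothetical.

    # Hedged sketch: count "Invalid user" preauth probes per source IP.
    import re
    from collections import Counter

    probe = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("compute-2.log") as fh:  # hypothetical capture of this journal
        for line in fh:
            m = probe.search(line)
            if m:
                hits[m.group(2)] += 1
    print(hits.most_common(5))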
Jan 22 14:07:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:42.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:42 compute-2 ceph-mon[77081]: pgmap v1189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:42 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:42.857+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:43.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:43 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:43.883+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:44.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:44 compute-2 ceph-mon[77081]: pgmap v1190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:44 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:44 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:44.907+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:44 compute-2 podman[233966]: 2026-01-22 14:07:44.994207057 +0000 UTC m=+0.054404862 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
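[editor's note] The podman health_status line above embeds the container's full config_data; the relevant part is the healthcheck stanza, which runs /openstack/healthcheck mounted read-only from /var/lib/openstack/healthchecks/ovn_metadata_agent. The same probe podman just reported as "healthy" can be re-run on demand; a sketch, assuming you are on the host with access to the container:

    # Hedged sketch: re-run the configured healthcheck for the container named
    # in the log line above. `podman healthcheck run` executes the container's
    # test command and exits 0 on success, nonzero on failure.
    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]
    ).returncode
    print("healthy" if rc == 0 else f"failing (rc={rc})")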
Jan 22 14:07:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:45.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:45.907+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:45 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:46 compute-2 sudo[233986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:46 compute-2 sudo[233986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:46 compute-2 sudo[233986]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:46.871+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:46 compute-2 sudo[234011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:07:46 compute-2 sudo[234011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:07:46 compute-2 sudo[234011]: pam_unix(sudo:session): session closed for user root
Jan 22 14:07:46 compute-2 ceph-mon[77081]: pgmap v1191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:46 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:07:47.176 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:07:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:07:47.177 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:07:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:07:47.177 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
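[editor's note] The Acquiring/acquired/released triple above (with waited/held timings, logged from lockutils.py:404/409/423) is the DEBUG trace of oslo_concurrency's `inner` wrapper — the same pattern nova_compute emits around "compute_resources" earlier in this section. A minimal sketch of the decorator that produces it:

    # Hedged sketch: the oslo_concurrency decorator behind the three lines
    # above; wait/hold times are logged at DEBUG exactly as shown.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # body runs with the named lock held
        pass

    check_child_processes()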
Jan 22 14:07:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:47.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:47.874+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:47 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:48.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:48.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:49 compute-2 ceph-mon[77081]: pgmap v1192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:49 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:49.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:49.931+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:50 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:50 compute-2 ceph-mon[77081]: pgmap v1193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:50 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:50.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:50.921+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:51 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:51.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:51.924+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:52 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:52 compute-2 ceph-mon[77081]: pgmap v1194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:52.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:52.914+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:53 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:53.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:53.901+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:54 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:54 compute-2 ceph-mon[77081]: pgmap v1195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:54.887+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:55 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:55 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:07:55 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:55.848+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:07:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:56.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:56 compute-2 ceph-mon[77081]: pgmap v1196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:56 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:56.856+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:57.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:57 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:57.808+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:07:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:58.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:07:58 compute-2 ceph-mon[77081]: pgmap v1197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:07:58 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:58.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:07:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:07:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:59.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:07:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:59.825+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:07:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:59 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:07:59 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1868 sec, osd.2 has slow ops (SLOW_OPS)
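
The SLOW_OPS health updates advance by roughly the 5 s reporting interval (1868, then 1873, 1878, 1883 s below), so nothing is recovering; subtracting the blocked age from the log time puts the onset near 13:36:51, about 31 minutes before this line:

    # Sketch: recover when the blocked op got stuck. The year is an
    # assumption; these syslog timestamps carry none.
    from datetime import datetime, timedelta

    logged  = datetime(2026, 1, 22, 14, 7, 59)   # log time of the health update
    blocked = timedelta(seconds=1868)            # "blocked for 1868 sec"
    print(logged - blocked)                      # 2026-01-22 13:36:51
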
Jan 22 14:08:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:00.845+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:00 compute-2 ceph-mon[77081]: pgmap v1198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:00 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
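
_set_new_cache_sizes is the monitor's periodic memory autotuning (the mon_memory_target machinery): the roughly 0.95 GiB cache_size is carved into incremental-osdmap, full-osdmap and RocksDB allocations, which is how I read inc_alloc/full_alloc/kv_alloc here. The three allocations should sum to just under the target:

    # Sketch: sanity-check the monitor cache split reported above.
    cache_size = 1020054731   # target, ~0.95 GiB
    inc_alloc  = 348127232    # incremental osdmap cache (as I read it)
    full_alloc = 348127232    # full osdmap cache
    kv_alloc   = 318767104    # RocksDB block cache
    total = inc_alloc + full_alloc + kv_alloc
    assert total <= cache_size
    print(f"{total / cache_size:.1%} of target allocated")   # -> 99.5%
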
Jan 22 14:08:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:01.798+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:02 compute-2 podman[234043]: 2026-01-22 14:08:02.049605964 +0000 UTC m=+0.114937414 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
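
podman journals one container health_status event per healthcheck run; for ovn_controller the configured test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_controller, and health_failing_streak=0 means it has not been failing. Pulling just the interesting fields out of these very wide lines, with the regex anchored on the comma/paren delimiters seen above so label keys like org.label-schema.name do not shadow the container name:

    # Sketch: lift name/health fields out of a podman health_status event.
    import re

    FIELD = re.compile(r'[(,] ?(\w+)=([^,)]+)')

    def health_fields(msg: str) -> dict[str, str]:
        keep = {'name', 'health_status', 'health_failing_streak'}
        return {k: v for k, v in FIELD.findall(msg) if k in keep}

    sample = ('container health_status 8eec14eed05e (image=quay.io/x:y, '
              'name=ovn_controller, health_status=healthy, health_failing_streak=0)')
    assert health_fields(sample) == {'name': 'ovn_controller',
                                     'health_status': 'healthy',
                                     'health_failing_streak': '0'}
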
Jan 22 14:08:02 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:02 compute-2 ceph-mon[77081]: pgmap v1199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:02.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:02.848+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:03.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:03 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:03.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:04 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:04 compute-2 ceph-mon[77081]: pgmap v1200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:04 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:04.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:05.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:05 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:05 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:05.904+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:06.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:06 compute-2 ceph-mon[77081]: pgmap v1201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:06 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:06.913+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:06 compute-2 sudo[234073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:06 compute-2 sudo[234073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:07 compute-2 sudo[234073]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:07 compute-2 sudo[234098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:07 compute-2 sudo[234098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:07 compute-2 sudo[234098]: pam_unix(sudo:session): session closed for user root
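
The paired sudo /bin/true sessions from ceph-admin look like cephadm's connectivity probe: before doing real work over SSH it confirms that passwordless root escalation still works (the real payloads, gather-facts and ceph-volume, follow at 14:08:30-31 below). To audit what the account actually executes, counting the COMMAND= fields is enough:

    # Sketch: summarise what ceph-admin ran via sudo, from
    # "sudo[PID]: ceph-admin : ... COMMAND=..." lines on stdin.
    import collections
    import re
    import sys

    CMD = re.compile(r'sudo\[\d+\]: ceph-admin :.*COMMAND=(?P<cmd>.+)$')

    counts = collections.Counter(
        m['cmd'] for line in sys.stdin if (m := CMD.search(line))
    )
    for cmd, n in counts.most_common():
        print(f"{n:4d}  {cmd}")
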
Jan 22 14:08:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:07.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:07.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:08.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:08.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:09 compute-2 ceph-mon[77081]: pgmap v1202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:09 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:09.929+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:10.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:10.883+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:11 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:11 compute-2 ceph-mon[77081]: pgmap v1203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:11 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:11.869+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:12 compute-2 ceph-mon[77081]: pgmap v1204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:12.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:12.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:13 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:13.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:13.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:14 compute-2 ceph-mon[77081]: pgmap v1205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:14.896+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:15.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:15 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:15 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:15.920+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:15 compute-2 podman[234127]: 2026-01-22 14:08:15.989463214 +0000 UTC m=+0.050759045 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 14:08:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:16.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:16 compute-2 ceph-mon[77081]: pgmap v1206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:16.940+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:17.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:17 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:17.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:18 compute-2 ceph-mon[77081]: pgmap v1207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:18 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:08:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/145215879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
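
client.openstack dispatching df and osd pool get-quota against the volumes pool is the periodic capacity poll from the OpenStack side (the Cinder RBD driver reports backend capacity this way, as far as I can tell). The same numbers are available interactively; a sketch assuming a reachable cluster and an admin keyring:

    # Sketch: fetch the same numbers client.openstack polls for, via the
    # ceph CLI.
    import json
    import subprocess

    def mon_cmd(*args: str) -> dict:
        out = subprocess.run(['ceph', *args, '--format', 'json'],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    usage = mon_cmd('df')                                   # cluster + per-pool usage
    quota = mon_cmd('osd', 'pool', 'get-quota', 'volumes')  # quota on pool "volumes"
    print(usage['stats']['total_bytes'], quota.get('quota_max_bytes'))
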
Jan 22 14:08:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:19.000+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:19.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:19 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:19 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:20.040+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:20 compute-2 ceph-mon[77081]: pgmap v1208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:20 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:21.069+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:21.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:22.052+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:22.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:23.005+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:23 compute-2 ceph-mon[77081]: pgmap v1209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:24.006+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:24 compute-2 ceph-mon[77081]: pgmap v1210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:24.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:25.009+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:25 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:25 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:26.034+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:26 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:26 compute-2 ceph-mon[77081]: pgmap v1211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:26.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:27.036+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:27 compute-2 sudo[234154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:27 compute-2 sudo[234154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:27 compute-2 sudo[234154]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:27 compute-2 sudo[234179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:27 compute-2 sudo[234179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:27 compute-2 sudo[234179]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:28.017+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:28 compute-2 ceph-mon[77081]: pgmap v1212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:28.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:29.025+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:29 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:30.060+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:30.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:30 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:30 compute-2 ceph-mon[77081]: pgmap v1213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:30 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:30 compute-2 sudo[234205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:30 compute-2 sudo[234205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-2 sudo[234205]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:30 compute-2 sudo[234230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:08:30 compute-2 sudo[234230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-2 sudo[234230]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:30 compute-2 sudo[234255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:30 compute-2 sudo[234255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-2 sudo[234255]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:30 compute-2 sudo[234280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:08:30 compute-2 sudo[234280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:30.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:30 compute-2 sudo[234280]: pam_unix(sudo:session): session closed for user root
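
That long sudo line is the cephadm mgr module running its per-host fact collection: /bin/python3 with the staged cephadm binary and --timeout 895 gather-facts, which prints a JSON fact dump (hostname, memory, NICs, and so on). It can be replayed by hand; the path and digest below are copied from the log line, while the JSON field names are from memory and worth spot-checking against your cephadm version:

    # Sketch: replay cephadm's fact collection manually.
    import json
    import subprocess

    CEPHADM = ('/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/'
               'cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d')

    facts = json.loads(subprocess.run(
        ['sudo', 'python3', CEPHADM, 'gather-facts'],
        check=True, capture_output=True, text=True).stdout)
    print(facts.get('hostname'), facts.get('memory_total_kb'))
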
Jan 22 14:08:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:31.096+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:31 compute-2 sudo[234338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:31 compute-2 sudo[234338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:31 compute-2 sudo[234338]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:31 compute-2 sudo[234363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:08:31 compute-2 sudo[234363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:31 compute-2 sudo[234363]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:31 compute-2 sudo[234388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:31 compute-2 sudo[234388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:31 compute-2 sudo[234388]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:31 compute-2 sudo[234413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 14:08:31 compute-2 sudo[234413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
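
Next comes disk discovery: ceph-volume inventory --format=json-pretty --filter-for-batch scans block devices and, with the batch filter, drops devices that could not be consumed for new OSDs anyway. A parser for its JSON output, assuming the usual path/available/rejected_reasons keys:

    # Sketch: list which devices cephadm could consume, from the JSON
    # that "ceph-volume inventory --format=json-pretty" emits on stdout.
    import json
    import sys

    for dev in json.load(sys.stdin):
        state = 'available' if dev.get('available') else 'rejected'
        why = ', '.join(dev.get('rejected_reasons', []))
        print(f"{dev['path']:<12} {state:<9} {why}")
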
Jan 22 14:08:31 compute-2 podman[234478]: 2026-01-22 14:08:31.653756944 +0000 UTC m=+0.039536741 container create fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 14:08:31 compute-2 systemd[1]: Started libpod-conmon-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope.
Jan 22 14:08:31 compute-2 systemd[1]: Started libcrun container.
Jan 22 14:08:31 compute-2 podman[234478]: 2026-01-22 14:08:31.635605307 +0000 UTC m=+0.021385124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:08:31 compute-2 podman[234478]: 2026-01-22 14:08:31.731627132 +0000 UTC m=+0.117406949 container init fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 14:08:31 compute-2 podman[234478]: 2026-01-22 14:08:31.738236805 +0000 UTC m=+0.124016603 container start fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 14:08:31 compute-2 podman[234478]: 2026-01-22 14:08:31.741223164 +0000 UTC m=+0.127002961 container attach fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 14:08:31 compute-2 systemd[1]: libpod-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope: Deactivated successfully.
Jan 22 14:08:31 compute-2 pensive_mccarthy[234495]: 167 167
Jan 22 14:08:31 compute-2 conmon[234495]: conmon fc517f37329d627da191 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope/container/memory.events
Jan 22 14:08:31 compute-2 podman[234478]: 2026-01-22 14:08:31.745175618 +0000 UTC m=+0.130955415 container died fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 14:08:31 compute-2 systemd[1]: var-lib-containers-storage-overlay-4eef4f06ca6b17b1ba01b4dd5148ff7cc37b70c682576d81bd21dc586105a325-merged.mount: Deactivated successfully.
Jan 22 14:08:31 compute-2 podman[234478]: 2026-01-22 14:08:31.786398132 +0000 UTC m=+0.172177959 container remove fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 14:08:31 compute-2 systemd[1]: libpod-conmon-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope: Deactivated successfully.
Jan 22 14:08:31 compute-2 podman[234518]: 2026-01-22 14:08:31.986083703 +0000 UTC m=+0.052642765 container create 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:08:32 compute-2 systemd[1]: Started libpod-conmon-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope.
Jan 22 14:08:32 compute-2 systemd[1]: Started libcrun container.
Jan 22 14:08:32 compute-2 podman[234518]: 2026-01-22 14:08:31.96924953 +0000 UTC m=+0.035808622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:08:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 14:08:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 14:08:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 14:08:32 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 14:08:32 compute-2 podman[234518]: 2026-01-22 14:08:32.083244348 +0000 UTC m=+0.149803440 container init 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 14:08:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:32.087+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:32 compute-2 podman[234518]: 2026-01-22 14:08:32.090466608 +0000 UTC m=+0.157025700 container start 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 14:08:32 compute-2 podman[234518]: 2026-01-22 14:08:32.094968756 +0000 UTC m=+0.161527908 container attach 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 14:08:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:32.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:32 compute-2 podman[234537]: 2026-01-22 14:08:32.166519868 +0000 UTC m=+0.087968065 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:08:32 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:32 compute-2 ceph-mon[77081]: pgmap v1214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:32 compute-2 nova_compute[226433]: 2026-01-22 14:08:32.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:32 compute-2 nova_compute[226433]: 2026-01-22 14:08:32.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:32.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:33.083+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:33 compute-2 charming_cerf[234534]: [
Jan 22 14:08:33 compute-2 charming_cerf[234534]:     {
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "available": false,
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "ceph_device": false,
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "lsm_data": {},
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "lvs": [],
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "path": "/dev/sr0",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "rejected_reasons": [
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "Insufficient space (<5GB)",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "Has a FileSystem"
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         ],
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         "sys_api": {
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "actuators": null,
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "device_nodes": "sr0",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "devname": "sr0",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "human_readable_size": "482.00 KB",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "id_bus": "ata",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "model": "QEMU DVD-ROM",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "nr_requests": "2",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "parent": "/dev/sr0",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "partitions": {},
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "path": "/dev/sr0",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "removable": "1",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "rev": "2.5+",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "ro": "0",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "rotational": "1",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "sas_address": "",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "sas_device_handle": "",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "scheduler_mode": "mq-deadline",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "sectors": 0,
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "sectorsize": "2048",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "size": 493568.0,
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "support_discard": "2048",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "type": "disk",
Jan 22 14:08:33 compute-2 charming_cerf[234534]:             "vendor": "QEMU"
Jan 22 14:08:33 compute-2 charming_cerf[234534]:         }
Jan 22 14:08:33 compute-2 charming_cerf[234534]:     }
Jan 22 14:08:33 compute-2 charming_cerf[234534]: ]
Jan 22 14:08:33 compute-2 systemd[1]: libpod-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope: Deactivated successfully.
Jan 22 14:08:33 compute-2 podman[234518]: 2026-01-22 14:08:33.221330814 +0000 UTC m=+1.287889886 container died 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:08:33 compute-2 systemd[1]: libpod-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope: Consumed 1.139s CPU time.
Jan 22 14:08:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a-merged.mount: Deactivated successfully.
Jan 22 14:08:33 compute-2 podman[234518]: 2026-01-22 14:08:33.271631597 +0000 UTC m=+1.338190669 container remove 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 22 14:08:33 compute-2 systemd[1]: libpod-conmon-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope: Deactivated successfully.
Jan 22 14:08:33 compute-2 sudo[234413]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:34.128+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:34.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:08:34 compute-2 ceph-mon[77081]: pgmap v1215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:08:34 compute-2 nova_compute[226433]: 2026-01-22 14:08:34.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:34 compute-2 nova_compute[226433]: 2026-01-22 14:08:34.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:34.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:35.107+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:35 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1084370533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:35 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:36.059+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:36 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:36 compute-2 ceph-mon[77081]: pgmap v1216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2271794858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:36 compute-2 nova_compute[226433]: 2026-01-22 14:08:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:36 compute-2 nova_compute[226433]: 2026-01-22 14:08:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:08:36 compute-2 nova_compute[226433]: 2026-01-22 14:08:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:08:36 compute-2 nova_compute[226433]: 2026-01-22 14:08:36.532 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:08:36 compute-2 nova_compute[226433]: 2026-01-22 14:08:36.532 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:08:36 compute-2 nova_compute[226433]: 2026-01-22 14:08:36.533 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:36 compute-2 nova_compute[226433]: 2026-01-22 14:08:36.533 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:08:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:36.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:37.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:37 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.538 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.540 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.540 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.540 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.541 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:08:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:08:37 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3884516686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:37 compute-2 nova_compute[226433]: 2026-01-22 14:08:37.956 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:08:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:38.010+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.094 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.095 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5192MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.095 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.096 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:08:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:38.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.171 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.172 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.172 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.205 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:08:38 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:38 compute-2 ceph-mon[77081]: pgmap v1217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3884516686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:08:38 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1175285928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.660 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.665 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.685 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.686 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:08:38 compute-2 nova_compute[226433]: 2026-01-22 14:08:38.686 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:08:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:38.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:39 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:39 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1175285928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:08:39 compute-2 nova_compute[226433]: 2026-01-22 14:08:39.681 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:08:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:39.958+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.971729) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919971775, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2547, "num_deletes": 510, "total_data_size": 4639115, "memory_usage": 4704960, "flush_reason": "Manual Compaction"}
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919996515, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2305866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31886, "largest_seqno": 34428, "table_properties": {"data_size": 2297650, "index_size": 4006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26045, "raw_average_key_size": 20, "raw_value_size": 2276708, "raw_average_value_size": 1819, "num_data_blocks": 172, "num_entries": 1251, "num_filter_entries": 1251, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090766, "oldest_key_time": 1769090766, "file_creation_time": 1769090919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 24926 microseconds, and 9983 cpu microseconds.
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.996646) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2305866 bytes OK
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.996681) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.999072) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.999102) EVENT_LOG_v1 {"time_micros": 1769090919999092, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.999130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:08:39 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4626572, prev total WAL file size 4688192, number of live WAL files 2.
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.001438) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2251KB)], [60(10117KB)]
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920001488, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12666130, "oldest_snapshot_seqno": -1}
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 7229 keys, 9297660 bytes, temperature: kUnknown
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920079631, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 9297660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9254164, "index_size": 24312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 191649, "raw_average_key_size": 26, "raw_value_size": 9126960, "raw_average_value_size": 1262, "num_data_blocks": 948, "num_entries": 7229, "num_filter_entries": 7229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090920, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.080012) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9297660 bytes
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.085461) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.8 rd, 118.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(9.5) write-amplify(4.0) OK, records in: 8222, records dropped: 993 output_compression: NoCompression
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.085493) EVENT_LOG_v1 {"time_micros": 1769090920085479, "job": 36, "event": "compaction_finished", "compaction_time_micros": 78278, "compaction_time_cpu_micros": 20842, "output_level": 6, "num_output_files": 1, "total_output_size": 9297660, "num_input_records": 8222, "num_output_records": 7229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920086488, "job": 36, "event": "table_file_deletion", "file_number": 62}
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920090392, "job": 36, "event": "table_file_deletion", "file_number": 60}
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.001277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:40.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:40 compute-2 sudo[235903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:40 compute-2 sudo[235903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:40 compute-2 sudo[235903]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:40 compute-2 sudo[235928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:08:40 compute-2 sudo[235928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:40 compute-2 sudo[235928]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:40 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:40 compute-2 ceph-mon[77081]: pgmap v1218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:40 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:08:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:40.961+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:41 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:41.986+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:42.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:42 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:42 compute-2 ceph-mon[77081]: pgmap v1219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:42.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:43 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:43.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:44.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:44.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:44 compute-2 ceph-mon[77081]: pgmap v1220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:44 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:44.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:45 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:45 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:45.858+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:46.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:46.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:46 compute-2 ceph-mon[77081]: pgmap v1221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:46 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:46.859+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:47 compute-2 podman[235957]: 2026-01-22 14:08:47.005281656 +0000 UTC m=+0.059516936 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 14:08:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:08:47.178 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:08:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:08:47.178 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:08:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:08:47.179 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:08:47 compute-2 sudo[235977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:47 compute-2 sudo[235977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:47 compute-2 sudo[235977]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:47 compute-2 sudo[236002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:08:47 compute-2 sudo[236002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:08:47 compute-2 sudo[236002]: pam_unix(sudo:session): session closed for user root
Jan 22 14:08:47 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:47.865+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:48.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:48 compute-2 sshd-session[236027]: Connection closed by authenticating user root 92.118.39.95 port 39816 [preauth]
Jan 22 14:08:48 compute-2 ceph-mon[77081]: pgmap v1222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:48 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:48.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:49.868+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:50 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:50 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:50.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:50.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:50.877+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:51 compute-2 ceph-mon[77081]: pgmap v1223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:51 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:51.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:52 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:52 compute-2 ceph-mon[77081]: pgmap v1224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:52.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:52.876+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:53 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.123328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933123491, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 432, "num_deletes": 251, "total_data_size": 408537, "memory_usage": 417752, "flush_reason": "Manual Compaction"}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933127349, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 268216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34433, "largest_seqno": 34860, "table_properties": {"data_size": 265866, "index_size": 450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6293, "raw_average_key_size": 19, "raw_value_size": 260953, "raw_average_value_size": 795, "num_data_blocks": 20, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090919, "oldest_key_time": 1769090919, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4046 microseconds, and 1219 cpu microseconds.
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127375) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 268216 bytes OK
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127388) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128906) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128920) EVENT_LOG_v1 {"time_micros": 1769090933128916, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 405790, prev total WAL file size 405790, number of live WAL files 2.
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.129281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(261KB)], [63(9079KB)]
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933129335, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 9565876, "oldest_snapshot_seqno": -1}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 7045 keys, 7847251 bytes, temperature: kUnknown
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933186372, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 7847251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7806160, "index_size": 22355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 188614, "raw_average_key_size": 26, "raw_value_size": 7683168, "raw_average_value_size": 1090, "num_data_blocks": 861, "num_entries": 7045, "num_filter_entries": 7045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.186617) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 7847251 bytes
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.188143) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.5 rd, 137.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(64.9) write-amplify(29.3) OK, records in: 7557, records dropped: 512 output_compression: NoCompression
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.188164) EVENT_LOG_v1 {"time_micros": 1769090933188154, "job": 38, "event": "compaction_finished", "compaction_time_micros": 57114, "compaction_time_cpu_micros": 20386, "output_level": 6, "num_output_files": 1, "total_output_size": 7847251, "num_input_records": 7557, "num_output_records": 7045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933188384, "job": 38, "event": "table_file_deletion", "file_number": 65}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933190282, "job": 38, "event": "table_file_deletion", "file_number": 63}
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.129240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:08:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:53.885+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:54 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:54 compute-2 ceph-mon[77081]: pgmap v1225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:54.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:54.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:55 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:55 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:08:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:55.972+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:08:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:56 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:56 compute-2 ceph-mon[77081]: pgmap v1226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:08:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:56.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:08:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:57.016+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:57 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:58.041+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:08:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:58.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:08:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:08:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:08:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:58.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:08:58 compute-2 ceph-mon[77081]: pgmap v1227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:08:58 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:59.066+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:08:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:59 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:08:59 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:00.085+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:00.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:00 compute-2 ceph-mon[77081]: pgmap v1228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:00 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:01.041+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:01 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:02.001+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:02.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:02.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:02 compute-2 ceph-mon[77081]: pgmap v1229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:02 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:02.977+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:03 compute-2 podman[236037]: 2026-01-22 14:09:03.068684638 +0000 UTC m=+0.129106276 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:09:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:03.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:04.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:04 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:04 compute-2 ceph-mon[77081]: pgmap v1230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:05.005+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:05 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:05 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:06.037+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:06 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:06 compute-2 ceph-mon[77081]: pgmap v1231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:07.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:07 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:07 compute-2 sudo[236066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:07 compute-2 sudo[236066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:07 compute-2 sudo[236066]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:07 compute-2 sudo[236091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:07 compute-2 sudo[236091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:07 compute-2 sudo[236091]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:08.057+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:08 compute-2 ceph-mon[77081]: pgmap v1232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:09.016+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:09 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:10.059+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:10 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:10 compute-2 ceph-mon[77081]: pgmap v1233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:10 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:10.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:11.104+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:11 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:12.119+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:12 compute-2 ceph-mon[77081]: pgmap v1234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:12.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:13.132+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:13 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:14.154+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:14 compute-2 ceph-mon[77081]: pgmap v1235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:14.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:15.106+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:15 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:15 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:16.074+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:16 compute-2 ceph-mon[77081]: pgmap v1236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:16.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:17.112+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:17 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:18 compute-2 podman[236121]: 2026-01-22 14:09:18.025586985 +0000 UTC m=+0.087949893 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 14:09:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:09:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:09:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:09:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:09:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:18.142+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:18.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:18 compute-2 ceph-mon[77081]: pgmap v1237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:09:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:09:18 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:19.172+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:19 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:20.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:20.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:09:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:20.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:09:20 compute-2 ceph-mon[77081]: pgmap v1238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:20 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:20 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:21.234+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:22.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:22.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:22.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:22 compute-2 ceph-mon[77081]: pgmap v1239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:22 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:23.187+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:24.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:24 compute-2 ceph-mon[77081]: pgmap v1240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:24 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:25.168+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:25 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:26.139+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:26.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:27 compute-2 ceph-mon[77081]: pgmap v1241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:27.140+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:27 compute-2 sudo[236147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:27 compute-2 sudo[236147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:27 compute-2 sudo[236147]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:27 compute-2 sudo[236172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:27 compute-2 sudo[236172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:27 compute-2 sudo[236172]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:28 compute-2 ceph-mon[77081]: pgmap v1242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:28.132+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:29 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:29.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:30 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:30 compute-2 ceph-mon[77081]: pgmap v1243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:30 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:30.197+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:30.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:31.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:32.212+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:32 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:32 compute-2 ceph-mon[77081]: pgmap v1244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:32.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:32 compute-2 nova_compute[226433]: 2026-01-22 14:09:32.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:32.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:33.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:34 compute-2 podman[236200]: 2026-01-22 14:09:34.013918544 +0000 UTC m=+0.074509852 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:09:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:34.195+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:34 compute-2 ceph-mon[77081]: pgmap v1245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:34 compute-2 nova_compute[226433]: 2026-01-22 14:09:34.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:34 compute-2 nova_compute[226433]: 2026-01-22 14:09:34.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:34.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:35.201+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:35 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:35 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1962 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:36.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:36 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:36 compute-2 ceph-mon[77081]: pgmap v1246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2613868033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:09:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:36.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.870 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:36 compute-2 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:09:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:37.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:37 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:37 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1472558186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:37 compute-2 nova_compute[226433]: 2026-01-22 14:09:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:37 compute-2 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:09:37 compute-2 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:09:37 compute-2 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:09:37 compute-2 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:09:37 compute-2 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:09:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:09:37 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1465988269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:37 compute-2 nova_compute[226433]: 2026-01-22 14:09:37.954 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.145 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.146 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5192MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.146 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.146 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.221 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.222 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.222 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:09:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:38.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.268 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:09:38 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:38 compute-2 ceph-mon[77081]: pgmap v1247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1465988269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:38.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:09:38 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1883155954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.695 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.700 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.715 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.717 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:09:38 compute-2 nova_compute[226433]: 2026-01-22 14:09:38.717 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:09:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:39.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:39 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:39 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1883155954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:09:39 compute-2 nova_compute[226433]: 2026-01-22 14:09:39.713 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:39 compute-2 nova_compute[226433]: 2026-01-22 14:09:39.714 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:40.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:40 compute-2 sudo[236273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:40 compute-2 sudo[236273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:40 compute-2 sudo[236273]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:40 compute-2 sudo[236298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:09:40 compute-2 sudo[236298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:40 compute-2 sudo[236298]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:40.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:40 compute-2 sudo[236324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:40 compute-2 sudo[236324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:40 compute-2 sudo[236324]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:40 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:40 compute-2 ceph-mon[77081]: pgmap v1248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:40 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:40 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:40 compute-2 sudo[236349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:09:40 compute-2 sudo[236349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:41.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:41 compute-2 sudo[236349]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:41 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:09:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:09:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:09:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:09:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:09:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:09:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:42.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:42.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:42 compute-2 ceph-mon[77081]: pgmap v1249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:42 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:43.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:43 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:44.173+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:44 compute-2 nova_compute[226433]: 2026-01-22 14:09:44.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:09:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:44.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:44 compute-2 ceph-mon[77081]: pgmap v1250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:44 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:45.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:45 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1972 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:45 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:46.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:46.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:46 compute-2 ceph-mon[77081]: pgmap v1251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:46 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:09:47.179 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:09:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:09:47.180 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:09:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:09:47.180 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:09:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:47.252+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:47 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:47 compute-2 sudo[236411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:47 compute-2 sudo[236411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:47 compute-2 sudo[236411]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:47 compute-2 sudo[236436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:47 compute-2 sudo[236436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:47 compute-2 sudo[236436]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:48.266+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:48.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:48 compute-2 sudo[236462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:09:48 compute-2 sudo[236462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:48 compute-2 sudo[236462]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:48 compute-2 sudo[236493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:09:48 compute-2 podman[236486]: 2026-01-22 14:09:48.737251241 +0000 UTC m=+0.062092484 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 14:09:48 compute-2 sudo[236493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:09:48 compute-2 sudo[236493]: pam_unix(sudo:session): session closed for user root
Jan 22 14:09:48 compute-2 ceph-mon[77081]: pgmap v1252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:09:48 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:09:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:49.312+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:50 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:50 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:50.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:50.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:50.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:50 compute-2 ceph-mon[77081]: pgmap v1253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:50 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:51.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:52 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:52.308+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:52.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:53.264+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:53 compute-2 ceph-mon[77081]: pgmap v1254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:53 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:54.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:54 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:54 compute-2 ceph-mon[77081]: pgmap v1255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:54.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:55.266+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:55 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:55 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:09:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:09:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:56.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:09:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:09:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:56.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:56 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:56 compute-2 ceph-mon[77081]: pgmap v1256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:57.259+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:57 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:57 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:58.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:58.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:09:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:09:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:58.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:09:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:59.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:09:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:09:59 compute-2 ceph-mon[77081]: pgmap v1257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:09:59 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:00.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:00.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:01 compute-2 sshd-session[236537]: Invalid user solana from 45.148.10.240 port 44484
Jan 22 14:10:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:01 compute-2 sshd-session[236537]: Connection closed by invalid user solana 45.148.10.240 port 44484 [preauth]
Jan 22 14:10:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:01.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:01 compute-2 ceph-mon[77081]: pgmap v1258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:01 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:01 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 14:10:01 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 14:10:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:02.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:02.358+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:02.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:02 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:02 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:02 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:02 compute-2 ceph-mon[77081]: pgmap v1259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:03.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:04.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:04 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:04 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:04.426+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:04.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:05 compute-2 podman[236542]: 2026-01-22 14:10:05.041553316 +0000 UTC m=+0.099653128 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:10:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:05.414+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:05 compute-2 ceph-mon[77081]: pgmap v1260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:05 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:05 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:06.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:06 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:06 compute-2 ceph-mon[77081]: pgmap v1261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:06.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:06.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:07 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:07.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:07 compute-2 sudo[236570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:07 compute-2 sudo[236570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:07 compute-2 sudo[236570]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:08 compute-2 sudo[236595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:08 compute-2 sudo[236595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:08 compute-2 sudo[236595]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:08.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:08 compute-2 ceph-mon[77081]: pgmap v1262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:08.452+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:08.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:09 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:09.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:09 compute-2 nova_compute[226433]: 2026-01-22 14:10:09.923 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:09 compute-2 nova_compute[226433]: 2026-01-22 14:10:09.924 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:09 compute-2 nova_compute[226433]: 2026-01-22 14:10:09.951 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:10:09 compute-2 nova_compute[226433]: 2026-01-22 14:10:09.994 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:09 compute-2 nova_compute[226433]: 2026-01-22 14:10:09.994 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.031 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.061 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.062 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.069 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.069 226437 INFO nova.compute.claims [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.143 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.265 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:10.404+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:10 compute-2 ceph-mon[77081]: pgmap v1263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:10 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:10 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:10.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:10:10 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1847141595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.732 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.740 226437 DEBUG nova.compute.provider_tree [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.762 226437 DEBUG nova.scheduler.client.report [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.788 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.790 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.797 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.805 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.806 226437 INFO nova.compute.claims [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.847 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.848 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.908 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:10:10 compute-2 nova_compute[226433]: 2026-01-22 14:10:10.945 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.059 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.091 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.093 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.094 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Creating image(s)
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.126 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.153 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.185 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.192 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.241 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Automatically allocating a network for project e6c399bf43074b81b45ca1d976cb2b18. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Jan 22 14:10:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:11.441+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:10:11 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2985836776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.494 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.501 226437 DEBUG nova.compute.provider_tree [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.519 226437 DEBUG nova.scheduler.client.report [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.545 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.546 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:10:11 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:11 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1847141595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:11 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4089952682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:11 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2985836776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.567 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.568 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.569 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.569 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.599 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.602 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.628 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.629 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.659 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.677 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.825 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.827 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.828 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Creating image(s)
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.867 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.900 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.923 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.927 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.959 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.994 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.995 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.995 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:11 compute-2 nova_compute[226433]: 2026-01-22 14:10:11.996 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.018 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.021 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 2314cf64-76a5-4383-8f2e-58228261f71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.085 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] resizing rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.243 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c72e43b-d26a-47b8-ab7d-739190e552a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.268 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.269 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Ensure instance console log exists: /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.269 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.270 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.270 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.299 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 2314cf64-76a5-4383-8f2e-58228261f71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:12.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.364 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] resizing rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 22 14:10:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:12.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.458 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Automatically allocating a network for project e6c399bf43074b81b45ca1d976cb2b18. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.467 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'migration_context' on Instance uuid 2314cf64-76a5-4383-8f2e-58228261f71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:10:12 compute-2 ceph-mon[77081]: pgmap v1264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.572 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.572 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Ensure instance console log exists: /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.573 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.573 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:12 compute-2 nova_compute[226433]: 2026-01-22 14:10:12.573 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:12.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:12 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:12.851 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:10:12 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:12.853 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:10:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:13.442+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:13 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:14.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:14.471+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:14 compute-2 ceph-mon[77081]: pgmap v1265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 236 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:14.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:15.446+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:15 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2002 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:15 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:16.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:16.490+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:16 compute-2 ceph-mon[77081]: pgmap v1266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:17.534+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:18.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:18.553+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:18 compute-2 ceph-mon[77081]: pgmap v1267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:18 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2933046963' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:10:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2933046963' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:10:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:18.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:18 compute-2 podman[237002]: 2026-01-22 14:10:18.994332325 +0000 UTC m=+0.052203112 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 14:10:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:19.593+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:19 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:20.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:20.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:20.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:20 compute-2 ceph-mon[77081]: pgmap v1268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:20 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:20 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:20 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:20.855 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:21.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:22.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:22.658+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:23 compute-2 ceph-mon[77081]: pgmap v1269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:23.621+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:24 compute-2 ceph-mon[77081]: pgmap v1270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:24.573+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:25 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:25 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:25.615+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:26.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:26.657+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:27 compute-2 ceph-mon[77081]: pgmap v1271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 22 14:10:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:27.680+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:28 compute-2 sudo[237026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:28 compute-2 sudo[237026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:28 compute-2 sudo[237026]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:28 compute-2 sudo[237051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:28 compute-2 sudo[237051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:28 compute-2 sudo[237051]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:28.677+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:28 compute-2 ceph-mon[77081]: pgmap v1272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:29.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:30.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:30 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:30.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:30.756+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:31 compute-2 ceph-mon[77081]: pgmap v1273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:31 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:31.733+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:32 compute-2 nova_compute[226433]: 2026-01-22 14:10:32.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:32 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:32 compute-2 ceph-mon[77081]: pgmap v1274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:32.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:32.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:33.757+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:34 compute-2 ceph-mon[77081]: pgmap v1275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:34.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:34.743+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:35 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1327466574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:35 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:35 compute-2 nova_compute[226433]: 2026-01-22 14:10:35.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:35 compute-2 nova_compute[226433]: 2026-01-22 14:10:35.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:10:35 compute-2 nova_compute[226433]: 2026-01-22 14:10:35.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:10:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:35.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:36 compute-2 podman[237080]: 2026-01-22 14:10:36.014394571 +0000 UTC m=+0.081819756 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 14:10:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.225 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Automatically allocated network: {'id': '18c81f01-33be-49a1-a179-aecc87794f99', 'name': 'auto_allocated_network', 'tenant_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['41485253-d693-4726-824d-ace746b534e1', '9c3d77fd-5c90-4745-9c8a-c335ad8bf441'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-22T14:10:12Z', 'updated_at': '2026-01-22T14:10:26Z', 'revision_number': 4, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.226 226437 DEBUG nova.policy [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fd58a5335a8745f1b3ce1bd9a0439003', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 14:10:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:36 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:36 compute-2 ceph-mon[77081]: pgmap v1276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/970741415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2036072568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.543 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.565 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.567 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:36 compute-2 nova_compute[226433]: 2026-01-22 14:10:36.567 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:36.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:36.761+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.249 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Automatically allocated network: {'id': '18c81f01-33be-49a1-a179-aecc87794f99', 'name': 'auto_allocated_network', 'tenant_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['41485253-d693-4726-824d-ace746b534e1', '9c3d77fd-5c90-4745-9c8a-c335ad8bf441'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-22T14:10:12Z', 'updated_at': '2026-01-22T14:10:26Z', 'revision_number': 4, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.250 226437 DEBUG nova.policy [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fd58a5335a8745f1b3ce1bd9a0439003', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.424 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Successfully created port: 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.548 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:10:37 compute-2 nova_compute[226433]: 2026-01-22 14:10:37.548 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:37.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:38 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/187506382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:10:38 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2167703012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:38 compute-2 nova_compute[226433]: 2026-01-22 14:10:38.581 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:38.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:38.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:38 compute-2 nova_compute[226433]: 2026-01-22 14:10:38.731 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:10:38 compute-2 nova_compute[226433]: 2026-01-22 14:10:38.732 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5146MB free_disk=20.888916015625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:10:38 compute-2 nova_compute[226433]: 2026-01-22 14:10:38.732 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:38 compute-2 nova_compute[226433]: 2026-01-22 14:10:38.732 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:38 compute-2 nova_compute[226433]: 2026-01-22 14:10:38.985 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Successfully created port: 1bf106b6-ded0-49a9-a53d-2c3faebdf840 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.177 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.177 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 0c72e43b-d26a-47b8-ab7d-739190e552a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.177 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 2314cf64-76a5-4383-8f2e-58228261f71b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.178 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.178 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:10:39 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:39 compute-2 ceph-mon[77081]: pgmap v1277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:39 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:39 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2167703012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.355 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Successfully updated port: 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.420 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:39.678+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.686 226437 DEBUG nova.compute.manager [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-changed-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.687 226437 DEBUG nova.compute.manager [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Refreshing instance network info cache due to event network-changed-3fe867d7-5ecf-4683-85f1-5f2bdce33a78. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.687 226437 DEBUG oslo_concurrency.lockutils [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.688 226437 DEBUG oslo_concurrency.lockutils [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.688 226437 DEBUG nova.network.neutron [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Refreshing network info cache for port 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.702 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:10:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:10:39 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2822757333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.924 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.929 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.961 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:10:39 compute-2 nova_compute[226433]: 2026-01-22 14:10:39.998 226437 DEBUG nova.network.neutron [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:10:40 compute-2 nova_compute[226433]: 2026-01-22 14:10:40.004 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:10:40 compute-2 nova_compute[226433]: 2026-01-22 14:10:40.004 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:40 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:40 compute-2 ceph-mon[77081]: pgmap v1278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:10:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2822757333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:10:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:40.665+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:40.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:41 compute-2 nova_compute[226433]: 2026-01-22 14:10:41.018 226437 DEBUG nova.network.neutron [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:10:41 compute-2 nova_compute[226433]: 2026-01-22 14:10:41.072 226437 DEBUG oslo_concurrency.lockutils [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:10:41 compute-2 nova_compute[226433]: 2026-01-22 14:10:41.073 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquired lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:10:41 compute-2 nova_compute[226433]: 2026-01-22 14:10:41.074 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:10:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:41 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:41 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:41.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:42.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:42 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:42 compute-2 ceph-mon[77081]: pgmap v1279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 22 14:10:42 compute-2 nova_compute[226433]: 2026-01-22 14:10:42.490 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:10:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:42.648+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:42.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:43 compute-2 nova_compute[226433]: 2026-01-22 14:10:42.999 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:43 compute-2 nova_compute[226433]: 2026-01-22 14:10:43.000 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:43 compute-2 nova_compute[226433]: 2026-01-22 14:10:43.252 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Successfully updated port: 1bf106b6-ded0-49a9-a53d-2c3faebdf840 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 22 14:10:43 compute-2 nova_compute[226433]: 2026-01-22 14:10:43.339 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:10:43 compute-2 nova_compute[226433]: 2026-01-22 14:10:43.339 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquired lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:10:43 compute-2 nova_compute[226433]: 2026-01-22 14:10:43.340 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:10:43 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:43.662+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:44 compute-2 nova_compute[226433]: 2026-01-22 14:10:44.254 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:10:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:44 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:44 compute-2 ceph-mon[77081]: pgmap v1280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 22 14:10:44 compute-2 nova_compute[226433]: 2026-01-22 14:10:44.574 226437 DEBUG nova.compute.manager [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-changed-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:10:44 compute-2 nova_compute[226433]: 2026-01-22 14:10:44.575 226437 DEBUG nova.compute.manager [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Refreshing instance network info cache due to event network-changed-1bf106b6-ded0-49a9-a53d-2c3faebdf840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:10:44 compute-2 nova_compute[226433]: 2026-01-22 14:10:44.575 226437 DEBUG oslo_concurrency.lockutils [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:10:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:44.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:44.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:45 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:45 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:45.684+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:46 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:46 compute-2 ceph-mon[77081]: pgmap v1281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:46.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:46.710+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:47.180 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:47.181 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:47.181 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:47.712+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:47 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.853 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Updating instance_info_cache with network_info: [{"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.925 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Releasing lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.925 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance network_info: |[{"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.927 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start _get_guest_xml network_info=[{"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.930 226437 WARNING nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.941 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.942 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.953 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.954 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.955 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.955 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.955 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.960 226437 DEBUG nova.privsep.utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 22 14:10:47 compute-2 nova_compute[226433]: 2026-01-22 14:10:47.961 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:48 compute-2 sudo[237177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:48.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:10:48 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/61830410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:48 compute-2 sudo[237177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:48 compute-2 sudo[237177]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.378 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.406 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.410 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:48 compute-2 sudo[237204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:48 compute-2 sudo[237204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:48 compute-2 sudo[237204]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.518 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:10:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:48.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:48.677+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.684 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updating instance_info_cache with network_info: [{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.702 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Releasing lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.703 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance network_info: |[{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.703 226437 DEBUG oslo_concurrency.lockutils [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.704 226437 DEBUG nova.network.neutron [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Refreshing network info cache for port 1bf106b6-ded0-49a9-a53d-2c3faebdf840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.706 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start _get_guest_xml network_info=[{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.710 226437 WARNING nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.735 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.736 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.753 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.754 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.755 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.755 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.755 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.760 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:48 compute-2 sudo[237268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:48 compute-2 sudo[237268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:48 compute-2 sudo[237268]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:10:48 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1226628873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.842 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.844 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-2',id=6,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:11Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=2314cf64-76a5-4383-8f2e-58228261f71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.844 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.846 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.849 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2314cf64-76a5-4383-8f2e-58228261f71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.962 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] End _get_guest_xml xml=<domain type="kvm">
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <uuid>2314cf64-76a5-4383-8f2e-58228261f71b</uuid>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <name>instance-00000006</name>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <memory>131072</memory>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <vcpu>1</vcpu>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <metadata>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <nova:name>tempest-tempest.common.compute-instance-811251323-2</nova:name>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <nova:creationTime>2026-01-22 14:10:47</nova:creationTime>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <nova:flavor name="m1.nano">
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:memory>128</nova:memory>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:disk>1</nova:disk>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:swap>0</nova:swap>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:ephemeral>0</nova:ephemeral>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:vcpus>1</nova:vcpus>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       </nova:flavor>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <nova:owner>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:user uuid="fd58a5335a8745f1b3ce1bd9a0439003">tempest-AutoAllocateNetworkTest-687426125-project-member</nova:user>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:project uuid="e6c399bf43074b81b45ca1d976cb2b18">tempest-AutoAllocateNetworkTest-687426125</nova:project>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       </nova:owner>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <nova:ports>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <nova:port uuid="3fe867d7-5ecf-4683-85f1-5f2bdce33a78">
Jan 22 14:10:48 compute-2 nova_compute[226433]:           <nova:ip type="fixed" address="10.1.0.8" ipVersion="4"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:           <nova:ip type="fixed" address="fdfe:381f:8400::3c7" ipVersion="6"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         </nova:port>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       </nova:ports>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </nova:instance>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   </metadata>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <sysinfo type="smbios">
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <system>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <entry name="manufacturer">RDO</entry>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <entry name="product">OpenStack Compute</entry>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <entry name="serial">2314cf64-76a5-4383-8f2e-58228261f71b</entry>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <entry name="uuid">2314cf64-76a5-4383-8f2e-58228261f71b</entry>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <entry name="family">Virtual Machine</entry>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </system>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   </sysinfo>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <os>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <boot dev="hd"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <smbios mode="sysinfo"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   </os>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <features>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <acpi/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <apic/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <vmcoreinfo/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   </features>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <clock offset="utc">
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <timer name="pit" tickpolicy="delay"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <timer name="hpet" present="no"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   </clock>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <cpu mode="custom" match="exact">
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <model>Nehalem</model>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <topology sockets="1" cores="1" threads="1"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   </cpu>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   <devices>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <disk type="network" device="disk">
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/2314cf64-76a5-4383-8f2e-58228261f71b_disk">
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       </source>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <target dev="vda" bus="virtio"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <disk type="network" device="cdrom">
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/2314cf64-76a5-4383-8f2e-58228261f71b_disk.config">
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       </source>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:10:48 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <target dev="sda" bus="sata"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <interface type="ethernet">
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <mac address="fa:16:3e:c1:38:78"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <driver name="vhost" rx_queue_size="512"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <mtu size="1442"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <target dev="tap3fe867d7-5e"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </interface>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <serial type="pty">
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <log file="/var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/console.log" append="off"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </serial>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <video>
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </video>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <input type="tablet" bus="usb"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <rng model="virtio">
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <backend model="random">/dev/urandom</backend>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </rng>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <controller type="usb" index="0"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     <memballoon model="virtio">
Jan 22 14:10:48 compute-2 nova_compute[226433]:       <stats period="10"/>
Jan 22 14:10:48 compute-2 nova_compute[226433]:     </memballoon>
Jan 22 14:10:48 compute-2 nova_compute[226433]:   </devices>
Jan 22 14:10:48 compute-2 nova_compute[226433]: </domain>
Jan 22 14:10:48 compute-2 nova_compute[226433]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
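The block above is the complete libvirt domain XML nova generated for instance-00000006: q35 machine type from the image property, both disks served over RBD from the three monitors at 192.168.122.100-102, and the tap device that os-vif wires into br-int below. Because it is plain XML, the interesting fields can be pulled out with the standard library; a minimal sketch, assuming the dump has been captured into a string named domain_xml:

    import xml.etree.ElementTree as ET

    # domain_xml: the <domain type="kvm">...</domain> text captured above.
    root = ET.fromstring(domain_xml)

    print(root.findtext('name'))                # instance-00000006
    print(root.find('os/type').get('machine'))  # q35

    # RBD-backed disks and their monitor endpoints.
    for disk in root.findall("devices/disk/source[@protocol='rbd']/.."):
        src = disk.find('source')
        hosts = ['%s:%s' % (h.get('name'), h.get('port'))
                 for h in src.findall('host')]
        print(src.get('name'), hosts)

    # The guest NIC and the tap device os-vif plugs into br-int.
    for iface in root.findall('devices/interface'):
        print(iface.find('mac').get('address'), iface.find('target').get('dev'))
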
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.964 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Preparing to wait for external event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.964 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.964 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.965 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
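The Acquiring/acquired/released triple above is oslo.concurrency's standard lock tracing: before spawning, nova registers the expected network-vif-plugged event under a per-instance "<uuid>-events" lock so the Neutron callback cannot race the registration. The same primitive is public API; a small sketch of both forms (the names here are illustrative):

    from oslo_concurrency import lockutils

    instance_uuid = '2314cf64-76a5-4383-8f2e-58228261f71b'

    # Context-manager form: produces exactly the Acquiring/acquired/released
    # DEBUG lines seen above, keyed on "<uuid>-events".
    with lockutils.lock(instance_uuid + '-events'):
        pass  # record the pending network-vif-plugged event here

    # Decorator form, as used by helpers like _create_or_get_event.
    @lockutils.synchronized('demo-events')
    def create_or_get_event():
        pass
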
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.965 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-2',id=6,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:11Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=2314cf64-76a5-4383-8f2e-58228261f71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.966 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.967 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:10:48 compute-2 nova_compute[226433]: 2026-01-22 14:10:48.968 226437 DEBUG os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
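os_vif.plug() above is the hand-off point: nova converts its internal VIF dict into the os-vif VIFOpenVSwitch object shown in the Converting/Converted pair, then dispatches to the loaded 'ovs' plugin. Driving the library directly looks roughly like the sketch below; the field values mirror the converted object in the log, and it assumes os-vif with its ovs plugin is installed and the local OVSDB is reachable:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the plugins registered via entry points

    my_vif = vif.VIFOpenVSwitch(
        id='3fe867d7-5ecf-4683-85f1-5f2bdce33a78',
        address='fa:16:3e:c1:38:78',
        vif_name='tap3fe867d7-5e',
        bridge_name='br-int',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='3fe867d7-5ecf-4683-85f1-5f2bdce33a78'),
        network=network.Network(id='18c81f01-33be-49a1-a179-aecc87794f99'),
    )
    instance = instance_info.InstanceInfo(
        uuid='2314cf64-76a5-4383-8f2e-58228261f71b',
        name='instance-00000006')

    os_vif.plug(my_vif, instance)    # the "Plugging vif ..." line above
    # ... and on teardown:
    os_vif.unplug(my_vif, instance)
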
Jan 22 14:10:48 compute-2 ceph-mon[77081]: pgmap v1282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:48 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/61830410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.010 226437 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.011 226437 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.011 226437 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.011 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.012 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.013 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.013 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.014 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.017 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:49 compute-2 sudo[237315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:10:49 compute-2 sudo[237315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:49 compute-2 sudo[237315]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:49 compute-2 podman[237339]: 2026-01-22 14:10:49.089522999 +0000 UTC m=+0.042017103 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 22 14:10:49 compute-2 sudo[237346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:49 compute-2 sudo[237346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:49 compute-2 sudo[237346]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.140 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.141 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.141 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
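AddBridgeCommand(may_exist=True) explains the "Transaction caused no change" that follows: br-int already exists, so the insert is an idempotent no-op. The same transaction can be reproduced with ovsdbapp's public Open_vSwitch API against the endpoint the log connected to (tcp:127.0.0.1:6640); a minimal sketch:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same semantics as the logged AddBridgeCommand: with may_exist=True the
    # commit is a no-op when br-int is already present.
    api.add_br('br-int', may_exist=True, datapath_type='system').execute(
        check_error=True)
    print(api.list_br().execute(check_error=True))
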
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.143 226437 INFO oslo.privsep.daemon [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpwbd1s1u6/privsep.sock']
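The privsep helper spawned above is oslo.privsep's lazy escalation path: the first time unprivileged nova code calls a function bound to the vif_plug_ovs.privsep.vif_plug context, a root daemon is forked via sudo/nova-rootwrap and the call is proxied over the unix socket in /tmp. The usual shape of such a context is sketched below; the option values are illustrative, not copied from vif_plug_ovs:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Analogous to the vif_plug_ovs.privsep.vif_plug context named in the log.
    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privsep',
        capabilities=[caps.CAP_NET_ADMIN],
    )

    @vif_plug.entrypoint
    def set_device_mtu(dev, mtu):
        # Runs inside the forked root daemon, not in nova-compute itself.
        pass
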
Jan 22 14:10:49 compute-2 sudo[237383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:10:49 compute-2 sudo[237383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:10:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1368713803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.211 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.236 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
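The "rbd image ... does not exist" line is nova's rbd_utils probing for an existing config-drive image before creating it, which is expected on first boot. The probe corresponds to the python-rbd binding, where opening a missing image raises rbd.ImageNotFound; a minimal sketch, again assuming the client.openstack credentials:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # the pool nova uses for disks here
        try:
            with rbd.Image(ioctx,
                           '0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config',
                           read_only=True):
                print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')  # the case logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
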
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.241 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:10:49 compute-2 sudo[237383]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:10:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3062574627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:49.654+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.659 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.661 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-1',id=5,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:10Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=0c72e43b-d26a-47b8-ab7d-739190e552a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.661 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.662 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.663 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c72e43b-d26a-47b8-ab7d-739190e552a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.770 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] End _get_guest_xml xml=<domain type="kvm">
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <uuid>0c72e43b-d26a-47b8-ab7d-739190e552a5</uuid>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <name>instance-00000005</name>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <memory>131072</memory>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <vcpu>1</vcpu>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <metadata>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <nova:name>tempest-tempest.common.compute-instance-811251323-1</nova:name>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <nova:creationTime>2026-01-22 14:10:48</nova:creationTime>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <nova:flavor name="m1.nano">
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:memory>128</nova:memory>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:disk>1</nova:disk>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:swap>0</nova:swap>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:ephemeral>0</nova:ephemeral>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:vcpus>1</nova:vcpus>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       </nova:flavor>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <nova:owner>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:user uuid="fd58a5335a8745f1b3ce1bd9a0439003">tempest-AutoAllocateNetworkTest-687426125-project-member</nova:user>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:project uuid="e6c399bf43074b81b45ca1d976cb2b18">tempest-AutoAllocateNetworkTest-687426125</nova:project>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       </nova:owner>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <nova:ports>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <nova:port uuid="1bf106b6-ded0-49a9-a53d-2c3faebdf840">
Jan 22 14:10:49 compute-2 nova_compute[226433]:           <nova:ip type="fixed" address="10.1.0.29" ipVersion="4"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:           <nova:ip type="fixed" address="fdfe:381f:8400::7d" ipVersion="6"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         </nova:port>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       </nova:ports>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </nova:instance>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   </metadata>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <sysinfo type="smbios">
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <system>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <entry name="manufacturer">RDO</entry>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <entry name="product">OpenStack Compute</entry>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <entry name="serial">0c72e43b-d26a-47b8-ab7d-739190e552a5</entry>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <entry name="uuid">0c72e43b-d26a-47b8-ab7d-739190e552a5</entry>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <entry name="family">Virtual Machine</entry>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </system>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   </sysinfo>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <os>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <boot dev="hd"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <smbios mode="sysinfo"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   </os>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <features>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <acpi/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <apic/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <vmcoreinfo/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   </features>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <clock offset="utc">
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <timer name="pit" tickpolicy="delay"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <timer name="hpet" present="no"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   </clock>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <cpu mode="custom" match="exact">
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <model>Nehalem</model>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <topology sockets="1" cores="1" threads="1"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   </cpu>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   <devices>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <disk type="network" device="disk">
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/0c72e43b-d26a-47b8-ab7d-739190e552a5_disk">
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       </source>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <target dev="vda" bus="virtio"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <disk type="network" device="cdrom">
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config">
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       </source>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:10:49 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <target dev="sda" bus="sata"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <interface type="ethernet">
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <mac address="fa:16:3e:91:f4:90"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <driver name="vhost" rx_queue_size="512"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <mtu size="1442"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <target dev="tap1bf106b6-de"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </interface>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <serial type="pty">
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <log file="/var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/console.log" append="off"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </serial>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <video>
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </video>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <input type="tablet" bus="usb"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <rng model="virtio">
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <backend model="random">/dev/urandom</backend>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </rng>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <controller type="usb" index="0"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     <memballoon model="virtio">
Jan 22 14:10:49 compute-2 nova_compute[226433]:       <stats period="10"/>
Jan 22 14:10:49 compute-2 nova_compute[226433]:     </memballoon>
Jan 22 14:10:49 compute-2 nova_compute[226433]:   </devices>
Jan 22 14:10:49 compute-2 nova_compute[226433]: </domain>
Jan 22 14:10:49 compute-2 nova_compute[226433]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.784 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Preparing to wait for external event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.784 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.785 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.785 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.785 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-1',id=5,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:10Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=0c72e43b-d26a-47b8-ab7d-739190e552a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.786 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.786 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.787 226437 DEBUG os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
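The three entries above show nova's os-vif handoff: the VIF dict received from Neutron is converted to a VIFOpenVSwitch object and passed to os_vif.plug(). A minimal standalone sketch of that call, with field values copied from the log (the InstanceInfo name is taken from the libvirt domain started later in this section; the sketch is illustrative, not nova's actual code path):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin via its entry point

    inst = instance_info.InstanceInfo(
        uuid='0c72e43b-d26a-47b8-ab7d-739190e552a5',
        name='instance-00000005')

    ovs_vif = vif.VIFOpenVSwitch(
        id='1bf106b6-ded0-49a9-a53d-2c3faebdf840',
        address='fa:16:3e:91:f4:90',
        vif_name='tap1bf106b6-de',
        bridge_name='br-int',
        has_traffic_filtering=True,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='1bf106b6-ded0-49a9-a53d-2c3faebdf840'),
        network=network.Network(id='18c81f01-33be-49a1-a179-aecc87794f99'))

    # On success os_vif logs "Successfully plugged vif ..." as seen below.
    os_vif.plug(ovs_vif, inst)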
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.787 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.787 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.788 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
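AddBridgeCommand with may_exist=True is idempotent, which is why the commit above reports "Transaction caused no change": br-int already exists on this chassis. A hedged sketch of the equivalent direct ovsdbapp usage (the unix socket path is an assumption; the service takes its ovsdb connection string from configuration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # No-op when br-int is already present, hence "Transaction caused no change".
    api.add_br('br-int', may_exist=True, datapath_type='system').execute(
        check_error=True)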
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.821 226437 INFO oslo.privsep.daemon [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Spawned new privsep daemon via rootwrap
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.700 237484 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.703 237484 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.705 237484 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.705 237484 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237484
Jan 22 14:10:49 compute-2 nova_compute[226433]: 2026-01-22 14:10:49.825 226437 WARNING oslo_privsep.priv_context [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] privsep daemon already running
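The privsep daemon spawned above runs as uid/gid 0/0 but retains only CAP_DAC_OVERRIDE and CAP_NET_ADMIN. A hedged sketch of how a context with that capability set is declared with oslo.privsep ('mypkg' and the decorated function are placeholders, not nova's actual definitions):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    ctx = priv_context.PrivContext(
        'mypkg',
        cfg_section='mypkg_privsep',
        pypath='mypkg.ctx',  # must be the importable path of this object
        capabilities=[caps.CAP_DAC_OVERRIDE, caps.CAP_NET_ADMIN])

    @ctx.entrypoint
    def set_mtu(ifname, mtu):
        # Body executes inside the privsep daemon: uid/gid 0/0, with the
        # effective/permitted sets limited to the two capabilities above.
        ...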
Jan 22 14:10:50 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1226628873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1368713803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3062574627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
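The cmd=[...] payloads ceph-mon dispatches above are ordinary mon commands in JSON form. A hedged sketch of issuing the same "mon dump" through the python-rados binding, authenticating as the client.openstack identity seen in the log (the conffile path is an assumption):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        # Identical JSON to the dispatched command logged above.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "mon dump", "format": "json"}), b'')
        mons = json.loads(outbuf)
    finally:
        cluster.shutdown()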
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:10:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.201 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fe867d7-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.202 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fe867d7-5e, col_values=(('external_ids', {'iface-id': '3fe867d7-5ecf-4683-85f1-5f2bdce33a78', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:38:78', 'vm-uuid': '2314cf64-76a5-4383-8f2e-58228261f71b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.203 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:50 compute-2 NetworkManager[49000]: <info>  [1769091050.2050] manager: (tap3fe867d7-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.206 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.212 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.213 226437 INFO os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e')
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.215 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.215 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bf106b6-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.216 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bf106b6-de, col_values=(('external_ids', {'iface-id': '1bf106b6-ded0-49a9-a53d-2c3faebdf840', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:f4:90', 'vm-uuid': '0c72e43b-d26a-47b8-ab7d-739190e552a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:50 compute-2 NetworkManager[49000]: <info>  [1769091050.2185] manager: (tap1bf106b6-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.218 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.222 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.227 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.228 226437 INFO os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de')
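Each tap device is plugged with a single two-command transaction, as logged at 14:10:50.215-216: AddPortCommand attaches the port to br-int, and DbSetCommand writes the external_ids keys that OVN uses to bind the port. A hedged equivalent, reusing the api handle from the earlier ovsdbapp sketch, with values copied from the log:

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap1bf106b6-de', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap1bf106b6-de',
            ('external_ids', {
                'iface-id': '1bf106b6-ded0-49a9-a53d-2c3faebdf840',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:91:f4:90',
                'vm-uuid': '0c72e43b-d26a-47b8-ab7d-739190e552a5'})))

ovn-controller matches external_ids:iface-id against the logical_port of a Southbound Port_Binding row, which is what triggers the "Claiming lport" lines further down.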
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No VIF found with MAC fa:16:3e:91:f4:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Using config drive
Jan 22 14:10:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.374 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.385 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.386 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.386 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No VIF found with MAC fa:16:3e:c1:38:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.386 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Using config drive
Jan 22 14:10:50 compute-2 nova_compute[226433]: 2026-01-22 14:10:50.411 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:50.635+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:50.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:51 compute-2 ceph-mon[77081]: pgmap v1283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:51 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:51 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:51.617+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:52 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:52 compute-2 ceph-mon[77081]: pgmap v1284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:10:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:52.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:52.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:53 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.181 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Creating config drive at /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.186 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp05_2ig4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.312 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp05_2ig4e" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.338 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.341 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.415 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.620 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.621 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deleting local config drive /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config because it was imported into RBD.
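Lines 14:10:53.186 through 14:10:53.621 show the full config-drive round trip: mkisofs builds an ISO9660 volume labelled config-2 on local disk, rbd import copies it into the vms pool, and the local copy is deleted. A hedged recreation of the two commands using the same oslo.concurrency helper the log cites (the metadata input directory stands in for nova's temporary directory):

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b'

    # Build the config drive ISO (volume label config-2).
    processutils.execute(
        '/usr/bin/mkisofs', '-o', base + '/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/metadata-dir')  # placeholder input dir

    # Import it into Ceph so the guest reads it from RBD, not local disk.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        base + '/disk.config',
        '2314cf64-76a5-4383-8f2e-58228261f71b_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')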
Jan 22 14:10:53 compute-2 systemd[1]: Starting libvirt secret daemon...
Jan 22 14:10:53 compute-2 systemd[1]: Started libvirt secret daemon.
Jan 22 14:10:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:53.678+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:53 compute-2 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 22 14:10:53 compute-2 NetworkManager[49000]: <info>  [1769091053.7084] manager: (tap3fe867d7-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/25)
Jan 22 14:10:53 compute-2 kernel: tap3fe867d7-5e: entered promiscuous mode
Jan 22 14:10:53 compute-2 ovn_controller[133156]: 2026-01-22T14:10:53Z|00027|binding|INFO|Claiming lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for this chassis.
Jan 22 14:10:53 compute-2 ovn_controller[133156]: 2026-01-22T14:10:53Z|00028|binding|INFO|3fe867d7-5ecf-4683-85f1-5f2bdce33a78: Claiming fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.712 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.718 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:53 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.739 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], port_security=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.8/26 fdfe:381f:8400::3c7/64', 'neutron:device_id': '2314cf64-76a5-4383-8f2e-58228261f71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=3fe867d7-5ecf-4683-85f1-5f2bdce33a78) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:10:53 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.740 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 bound to our chassis
Jan 22 14:10:53 compute-2 systemd-udevd[237607]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 14:10:53 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.742 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 14:10:53 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.743 143497 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_pg3kwj0/privsep.sock']
Jan 22 14:10:53 compute-2 NetworkManager[49000]: <info>  [1769091053.7559] device (tap3fe867d7-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 14:10:53 compute-2 NetworkManager[49000]: <info>  [1769091053.7566] device (tap3fe867d7-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 14:10:53 compute-2 systemd-machined[194970]: New machine qemu-1-instance-00000006.
Jan 22 14:10:53 compute-2 systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.793 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:53 compute-2 ovn_controller[133156]: 2026-01-22T14:10:53Z|00029|binding|INFO|Setting lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 ovn-installed in OVS
Jan 22 14:10:53 compute-2 ovn_controller[133156]: 2026-01-22T14:10:53Z|00030|binding|INFO|Setting lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 up in Southbound
Jan 22 14:10:53 compute-2 nova_compute[226433]: 2026-01-22 14:10:53.804 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.137 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Creating config drive at /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.141 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1anmr98 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:54 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:54 compute-2 ceph-mon[77081]: pgmap v1285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 265 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.264 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1anmr98" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.309 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.312 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.329 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091054.2725675, 2314cf64-76a5-4383-8f2e-58228261f71b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.330 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Started (Lifecycle Event)
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.361 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.365 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091054.27269, 2314cf64-76a5-4383-8f2e-58228261f71b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.365 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Paused (Lifecycle Event)
Jan 22 14:10:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:10:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.399 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.401 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
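The numeric states in the sync message above are nova's power-state constants: DB power_state 0 is NOSTATE before the first sync, and VM power_state 3 is PAUSED, because libvirt holds the guest paused until spawn completes. A small lookup table for reading these lines, matching the values defined in nova/compute/power_state.py:

    POWER_STATE = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    assert POWER_STATE[3] == 'PAUSED'   # state during spawn, as logged here
    assert POWER_STATE[1] == 'RUNNING'  # state after the later "Resumed" event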
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.417 143497 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.418 143497 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_pg3kwj0/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.289 237689 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.292 237689 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.294 237689 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.294 237689 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237689
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.421 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[e6abe1b3-6425-40f4-9cd0-3153fabe1009]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.431 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.456 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.457 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deleting local config drive /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config because it was imported into RBD.
Jan 22 14:10:54 compute-2 NetworkManager[49000]: <info>  [1769091054.5097] manager: (tap1bf106b6-de): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Jan 22 14:10:54 compute-2 systemd-udevd[237605]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 14:10:54 compute-2 kernel: tap1bf106b6-de: entered promiscuous mode
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.514 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:54 compute-2 ovn_controller[133156]: 2026-01-22T14:10:54Z|00031|binding|INFO|Claiming lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 for this chassis.
Jan 22 14:10:54 compute-2 ovn_controller[133156]: 2026-01-22T14:10:54Z|00032|binding|INFO|1bf106b6-ded0-49a9-a53d-2c3faebdf840: Claiming fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.520 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:54 compute-2 NetworkManager[49000]: <info>  [1769091054.5294] device (tap1bf106b6-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 14:10:54 compute-2 NetworkManager[49000]: <info>  [1769091054.5298] device (tap1bf106b6-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.529 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], port_security=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.29/26 fdfe:381f:8400::7d/64', 'neutron:device_id': '0c72e43b-d26a-47b8-ab7d-739190e552a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=1bf106b6-ded0-49a9-a53d-2c3faebdf840) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
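The metadata agent reacts to these bindings through ovsdbapp row events: the PortBindingUpdatedEvent matched above fires on updates to Port_Binding rows, and old=Port_Binding(chassis=[]) shows the chassis column was just set. A hedged, simplified sketch of such an event class (the class name and run() body are illustrative, not neutron's exact implementation):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisSetEvent(row_event.RowEvent):
        def __init__(self):
            # Watch updates on any Port_Binding row (no column conditions).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            print('lport %s bound to chassis %s'
                  % (row.logical_port, row.chassis))

    # Registered against the Southbound IDL, e.g. (assumed wiring):
    #   sb_idl.notify_handler.watch_event(PortBindingChassisSetEvent())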
Jan 22 14:10:54 compute-2 ovn_controller[133156]: 2026-01-22T14:10:54Z|00033|binding|INFO|Setting lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 ovn-installed in OVS
Jan 22 14:10:54 compute-2 ovn_controller[133156]: 2026-01-22T14:10:54Z|00034|binding|INFO|Setting lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 up in Southbound
Jan 22 14:10:54 compute-2 nova_compute[226433]: 2026-01-22 14:10:54.538 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:54 compute-2 systemd-machined[194970]: New machine qemu-2-instance-00000005.
Jan 22 14:10:54 compute-2 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Jan 22 14:10:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:54.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:54.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:10:54 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.983 237689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.983 237689 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:54 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.983 237689 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.054 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091055.0545008, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.055 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Started (Lifecycle Event)
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.095 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.098 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091055.0545843, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.098 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Paused (Lifecycle Event)
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.151 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.154 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:10:55 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.201 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.218 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.253 226437 DEBUG nova.network.neutron [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updated VIF entry in instance network info cache for port 1bf106b6-ded0-49a9-a53d-2c3faebdf840. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.254 226437 DEBUG nova.network.neutron [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updating instance_info_cache with network_info: [{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:10:55 compute-2 nova_compute[226433]: 2026-01-22 14:10:55.289 226437 DEBUG oslo_concurrency.lockutils [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.643 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7504ef-5ea4-4763-9186-1550852eb8cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.644 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18c81f01-31 in ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.646 237689 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18c81f01-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.646 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[27b237f2-aeb9-4d1b-a6e0-9a33e2cbc757]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.649 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[24a4ce15-afd3-49dc-acb6-7c72f965d268]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:55.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.670 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[b8030c21-55bc-4838-9e28-f185a3e3601f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.694 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[51a2ef51-e907-4b29-a57b-3332a7821ff7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.696 143497 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp3y50ov6x/privsep.sock']
Jan 22 14:10:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:10:56 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:10:56 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:10:56 compute-2 ceph-mon[77081]: pgmap v1286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 276 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 109 op/s
Jan 22 14:10:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:56.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.370 226437 DEBUG nova.compute.manager [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG oslo_concurrency.lockutils [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG oslo_concurrency.lockutils [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG oslo_concurrency.lockutils [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG nova.compute.manager [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Processing event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.372 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
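The network-vif-plugged handoff above is the standard Neutron-to-Nova synchronization: nova-compute registers a waiter before plugging (the prepare_for_instance_event lock at the top of this section), and Neutron reports the plug through Nova's os-server-external-events API once OVN marks the port up. A hedged sketch of the shape of that REST call (the endpoint and token are placeholders; Neutron's nova notifier uses a proper keystone session):

    import json
    import urllib.request

    body = {"events": [{
        "server_uuid": "2314cf64-76a5-4383-8f2e-58228261f71b",
        "name": "network-vif-plugged",
        "tag": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78",
        "status": "completed",
    }]}
    req = urllib.request.Request(
        "http://nova-api.example.com:8774/v2.1/os-server-external-events",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": "PLACEHOLDER"},
        method="POST")
    urllib.request.urlopen(req)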
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.376 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.376 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091056.3766308, 2314cf64-76a5-4383-8f2e-58228261f71b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.377 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Resumed (Lifecycle Event)
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.382 226437 INFO nova.virt.libvirt.driver [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance spawned successfully.
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.382 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.398 143497 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.399 143497 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3y50ov6x/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.254 237788 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.258 237788 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.260 237788 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.260 237788 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237788
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.401 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[d4af23fb-8ec3-4848-9de5-8532433215f2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.476 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.486 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.493 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.494 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.495 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.496 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.497 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.500 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.565 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.618 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 44.79 seconds to spawn the instance on the hypervisor.
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.619 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:56.676+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:56.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.711 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 46.59 seconds to build instance.
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.750 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 46.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.916 237788 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.916 237788 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.916 237788 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.975 226437 DEBUG nova.compute.manager [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG oslo_concurrency.lockutils [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG oslo_concurrency.lockutils [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG oslo_concurrency.lockutils [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG nova.compute.manager [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Processing event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.977 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.980 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091056.980116, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.980 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Resumed (Lifecycle Event)
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.982 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.984 226437 INFO nova.virt.libvirt.driver [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance spawned successfully.
Jan 22 14:10:56 compute-2 nova_compute[226433]: 2026-01-22 14:10:56.985 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.025 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.029 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.066 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.067 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.067 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.068 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.068 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.068 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.072 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.212 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 46.12 seconds to spawn the instance on the hypervisor.
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.212 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:10:57 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.310 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 47.28 seconds to build instance.
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.388 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 47.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.526 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[e163408c-4062-45a8-a111-26c3f8c4f82b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.544 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[fa320ec6-6547-494e-b615-80e18c454830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 NetworkManager[49000]: <info>  [1769091057.5508] manager: (tap18c81f01-30): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Jan 22 14:10:57 compute-2 systemd-udevd[237801]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.573 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[35ecab8b-8761-4b9c-ba58-b6ddfc1e8e62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.576 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[da3ec314-8faa-424a-a895-343eb0cd5c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 NetworkManager[49000]: <info>  [1769091057.5989] device (tap18c81f01-30): carrier: link connected
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.602 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c52cb-d3b2-47a4-aad2-b6f975519ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.617 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec81daa-d6b4-46ff-9d59-2ee90e9ac2dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237819, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.636 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0230b09b-487d-40b8-bf41-ac8ae3813b03]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:9efc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490830, 'tstamp': 490830}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237820, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.650 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b72157e6-9adb-43ca-9f8f-46ce76fb167d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237821, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.672 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[708272f9-9b10-4571-bfba-fbdf6c504bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:57.676+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.719 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[88963e28-03e2-4534-bd72-48eee44ad4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.721 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.721 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.722 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c81f01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.724 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:57 compute-2 NetworkManager[49000]: <info>  [1769091057.7246] manager: (tap18c81f01-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 22 14:10:57 compute-2 kernel: tap18c81f01-30: entered promiscuous mode
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.726 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.728 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c81f01-30, col_values=(('external_ids', {'iface-id': '27625ef7-8ad4-4498-ac70-a911e819f701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:57 compute-2 ovn_controller[133156]: 2026-01-22T14:10:57Z|00035|binding|INFO|Releasing lport 27625ef7-8ad4-4498-ac70-a911e819f701 from this chassis (sb_readonly=0)
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.729 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:57 compute-2 nova_compute[226433]: 2026-01-22 14:10:57.745 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.747 143497 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.748 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b64d1c25-9783-430c-b249-b51875b8d757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.751 143497 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: global
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     log         /dev/log local0 debug
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     log-tag     haproxy-metadata-proxy-18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     user        root
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     group       root
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     maxconn     1024
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     pidfile     /var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     daemon
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: defaults
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     log global
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     mode http
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     option httplog
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     option dontlognull
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     option http-server-close
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     option forwardfor
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     retries                 3
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     timeout http-request    30s
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     timeout connect         30s
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     timeout client          32s
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     timeout server          32s
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     timeout http-keep-alive 30s
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: listen listener
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     bind 169.254.169.254:80
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     server metadata /var/lib/neutron/metadata_proxy
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:     http-request add-header X-OVN-Network-ID 18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 22 14:10:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.753 143497 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'env', 'PROCESS_TAG=haproxy-18c81f01-33be-49a1-a179-aecc87794f99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18c81f01-33be-49a1-a179-aecc87794f99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 22 14:10:57 compute-2 sudo[237829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:10:57 compute-2 sudo[237829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:57 compute-2 sudo[237829]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:57 compute-2 sudo[237857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:10:57 compute-2 sudo[237857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:10:57 compute-2 sudo[237857]: pam_unix(sudo:session): session closed for user root
Jan 22 14:10:58 compute-2 podman[237904]: 2026-01-22 14:10:58.163775819 +0000 UTC m=+0.059168616 container create 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:10:58 compute-2 systemd[1]: Started libpod-conmon-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7.scope.
Jan 22 14:10:58 compute-2 podman[237904]: 2026-01-22 14:10:58.135583193 +0000 UTC m=+0.030976030 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 14:10:58 compute-2 systemd[1]: Started libcrun container.
Jan 22 14:10:58 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f1a7dbb9fbf437360a4b9755ab2b91a6644c636b82f1e3d91c08d6fa81b3c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 14:10:58 compute-2 podman[237904]: 2026-01-22 14:10:58.268931652 +0000 UTC m=+0.164324479 container init 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:10:58 compute-2 podman[237904]: 2026-01-22 14:10:58.294474107 +0000 UTC m=+0.189866904 container start 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 14:10:58 compute-2 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : New worker (237926) forked
Jan 22 14:10:58 compute-2 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : Loading success.
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.355 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf106b6-ded0-49a9-a53d-2c3faebdf840 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 unbound from our chassis
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.358 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 14:10:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:10:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.373 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[fea5eb6f-82f8-4fab-804e-eebbf828cb85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:58 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:10:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:10:58 compute-2 ceph-mon[77081]: pgmap v1287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 276 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.403 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[306f60a5-e170-4ed9-a7cb-21befa6f9c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.408 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee151a3-79f4-4d5f-ade5-5d53261571eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.416 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.438 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4941cc-64f2-43a5-a288-f8723255a487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.454 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[825b1fcf-11fd-4ccc-b3f9-891f485b5dc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237940, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.470 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[1b592c4c-d3ea-48a4-a9e3-ffcd322cbe5f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490839, 'tstamp': 490839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237941, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 26, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.0.2'], ['IFA_LOCAL', '10.1.0.2'], ['IFA_BROADCAST', '10.1.0.63'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490842, 'tstamp': 490842}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237941, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.471 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.473 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.476 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c81f01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.476 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.476 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c81f01-30, col_values=(('external_ids', {'iface-id': '27625ef7-8ad4-4498-ac70-a911e819f701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:10:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.477 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.643 226437 DEBUG nova.compute.manager [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG oslo_concurrency.lockutils [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG oslo_concurrency.lockutils [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG oslo_concurrency.lockutils [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG nova.compute.manager [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] No waiting events found dispatching network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:10:58 compute-2 nova_compute[226433]: 2026-01-22 14:10:58.645 226437 WARNING nova.compute.manager [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received unexpected event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for instance with vm_state active and task_state None.
Jan 22 14:10:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:10:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:10:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:58.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:10:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:58.720+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:59 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:10:59 compute-2 nova_compute[226433]: 2026-01-22 14:10:59.528 226437 DEBUG nova.compute.manager [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:10:59 compute-2 nova_compute[226433]: 2026-01-22 14:10:59.529 226437 DEBUG oslo_concurrency.lockutils [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:10:59 compute-2 nova_compute[226433]: 2026-01-22 14:10:59.529 226437 DEBUG oslo_concurrency.lockutils [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:10:59 compute-2 nova_compute[226433]: 2026-01-22 14:10:59.530 226437 DEBUG oslo_concurrency.lockutils [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:10:59 compute-2 nova_compute[226433]: 2026-01-22 14:10:59.530 226437 DEBUG nova.compute.manager [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] No waiting events found dispatching network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:10:59 compute-2 nova_compute[226433]: 2026-01-22 14:10:59.530 226437 WARNING nova.compute.manager [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received unexpected event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 for instance with vm_state active and task_state None.
Jan 22 14:10:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:59.687+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:10:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:00 compute-2 nova_compute[226433]: 2026-01-22 14:11:00.220 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:00 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:00 compute-2 ceph-mon[77081]: pgmap v1288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 293 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Jan 22 14:11:00 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 2048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:00.711+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:01 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:01.732+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:02 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:02 compute-2 ceph-mon[77081]: pgmap v1289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 298 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Jan 22 14:11:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:02.740+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.758 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.761 226437 INFO nova.compute.manager [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Terminating instance
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.762 226437 DEBUG nova.compute.manager [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 22 14:11:02 compute-2 kernel: tap1bf106b6-de (unregistering): left promiscuous mode
Jan 22 14:11:02 compute-2 NetworkManager[49000]: <info>  [1769091062.8221] device (tap1bf106b6-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.836 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:02 compute-2 ovn_controller[133156]: 2026-01-22T14:11:02Z|00036|binding|INFO|Releasing lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 from this chassis (sb_readonly=0)
Jan 22 14:11:02 compute-2 ovn_controller[133156]: 2026-01-22T14:11:02Z|00037|binding|INFO|Setting lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 down in Southbound
Jan 22 14:11:02 compute-2 ovn_controller[133156]: 2026-01-22T14:11:02Z|00038|binding|INFO|Removing iface tap1bf106b6-de ovn-installed in OVS
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.840 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.847 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], port_security=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.29/26 fdfe:381f:8400::7d/64', 'neutron:device_id': '0c72e43b-d26a-47b8-ab7d-739190e552a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=1bf106b6-ded0-49a9-a53d-2c3faebdf840) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.848 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf106b6-ded0-49a9-a53d-2c3faebdf840 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 unbound from our chassis
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.851 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.859 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.867 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7e29ecc5-bbc7-4d9b-9494-3e63f95df026]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.898 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[2b605f6f-a761-47dd-a16d-0354621139c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.901 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7fef56-c21a-4e32-8aa0-fc218f343806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:02 compute-2 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 22 14:11:02 compute-2 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 6.431s CPU time.
Jan 22 14:11:02 compute-2 systemd-machined[194970]: Machine qemu-2-instance-00000005 terminated.
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.936 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1e5507-300d-466b-b618-8317c46c093f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.952 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[d77f5c8b-fdda-4516-b6c7-9527ee4a7044]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237956, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.966 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[c88de1e1-a447-478a-84b6-2a4e56b165ea]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490839, 'tstamp': 490839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237957, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 26, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.0.2'], ['IFA_LOCAL', '10.1.0.2'], ['IFA_BROADCAST', '10.1.0.63'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490842, 'tstamp': 490842}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237957, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.968 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.969 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.976 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.976 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c81f01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.977 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.977 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c81f01-30, col_values=(('external_ids', {'iface-id': '27625ef7-8ad4-4498-ac70-a911e819f701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.978 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.996 226437 INFO nova.virt.libvirt.driver [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance destroyed successfully.
Jan 22 14:11:02 compute-2 nova_compute[226433]: 2026-01-22 14:11:02.996 226437 DEBUG nova.objects.instance [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'resources' on Instance uuid 0c72e43b-d26a-47b8-ab7d-739190e552a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.025 226437 DEBUG nova.virt.libvirt.vif [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-1',id=5,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:10:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:10:57Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=0c72e43b-d26a-47b8-ab7d-739190e552a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.025 226437 DEBUG nova.network.os_vif_util [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.026 226437 DEBUG nova.network.os_vif_util [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.026 226437 DEBUG os_vif [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.028 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.028 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf106b6-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.029 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.031 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.033 226437 INFO os_vif [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de')
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.278 226437 DEBUG nova.compute.manager [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-unplugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG oslo_concurrency.lockutils [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG oslo_concurrency.lockutils [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG oslo_concurrency.lockutils [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG nova.compute.manager [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] No waiting events found dispatching network-vif-unplugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG nova.compute.manager [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-unplugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.419 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.521 226437 INFO nova.virt.libvirt.driver [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deleting instance files /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5_del
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.522 226437 INFO nova.virt.libvirt.driver [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deletion of /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5_del complete
Jan 22 14:11:03 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.648 226437 DEBUG nova.virt.libvirt.host [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.649 226437 INFO nova.virt.libvirt.host [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] UEFI support detected
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 INFO nova.compute.manager [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 DEBUG oslo.service.loopingcall [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 DEBUG nova.compute.manager [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 22 14:11:03 compute-2 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 DEBUG nova.network.neutron [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 22 14:11:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:03.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:04 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:04 compute-2 ceph-mon[77081]: pgmap v1290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 298 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Jan 22 14:11:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:04.769+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG nova.compute.manager [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG oslo_concurrency.lockutils [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG oslo_concurrency.lockutils [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG oslo_concurrency.lockutils [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.536 226437 DEBUG nova.compute.manager [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] No waiting events found dispatching network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.536 226437 WARNING nova.compute.manager [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received unexpected event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 for instance with vm_state active and task_state deleting.
Jan 22 14:11:05 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:05 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:05 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:05.781+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.812 226437 DEBUG nova.network.neutron [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.842 226437 INFO nova.compute.manager [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 2.19 seconds to deallocate network for instance.
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.914 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.915 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:05 compute-2 nova_compute[226433]: 2026-01-22 14:11:05.984 226437 DEBUG nova.compute.manager [req-f97b4fcb-32a0-45f9-b287-ec7fdcfb7696 req-a95b3825-3443-426c-a753-d5295e3e6198 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-deleted-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:11:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:06 compute-2 nova_compute[226433]: 2026-01-22 14:11:06.436 226437 DEBUG oslo_concurrency.processutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:06 compute-2 ceph-mon[77081]: pgmap v1291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 235 op/s
Jan 22 14:11:06 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:06.772+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:11:06 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3749524116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:06 compute-2 nova_compute[226433]: 2026-01-22 14:11:06.887 226437 DEBUG oslo_concurrency.processutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:11:06 compute-2 nova_compute[226433]: 2026-01-22 14:11:06.892 226437 DEBUG nova.compute.provider_tree [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:11:06 compute-2 nova_compute[226433]: 2026-01-22 14:11:06.922 226437 DEBUG nova.scheduler.client.report [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:11:06 compute-2 nova_compute[226433]: 2026-01-22 14:11:06.959 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:06 compute-2 sshd-session[237990]: Connection closed by authenticating user root 92.118.39.95 port 47026 [preauth]
Jan 22 14:11:07 compute-2 nova_compute[226433]: 2026-01-22 14:11:07.018 226437 INFO nova.scheduler.client.report [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Deleted allocations for instance 0c72e43b-d26a-47b8-ab7d-739190e552a5
Jan 22 14:11:07 compute-2 podman[238015]: 2026-01-22 14:11:07.062441625 +0000 UTC m=+0.116237456 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 14:11:07 compute-2 nova_compute[226433]: 2026-01-22 14:11:07.197 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
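
The Lock ... "released" :: held lines here, and the Acquiring/acquired pairs later in the delete sequence, come from oslo.concurrency's lockutils: nova serializes each delete under a per-instance (UUID-named) lock and resource accounting under the "compute_resources" lock, and the ":: held N.NNNs" figure is simply the time spent inside the locked section. A minimal sketch of the same pattern, assuming only the public lockutils decorator (not nova's actual code):

    from oslo_concurrency import lockutils

    # Named, process-local lock; entering and leaving the decorated
    # function is what produces the acquired/released log lines above.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # critical section

    update_usage()
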
Jan 22 14:11:07 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:07 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3749524116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:07.782+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
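
osd.2 keeps re-reporting the same oldest slow op, an omap read of rbd_mirror_snapshot_schedule, and that is also what drives the SLOW_OPS health check updates further down. A sketch for inspecting it from this node with the standard Ceph CLI (the --id/--conf pair mirrors how other commands in this log authenticate; the exact JSON layout of `health detail` is an assumption based on current Ceph releases):

    import json
    import subprocess

    # Which health checks are firing, and their summaries (SLOW_OPS is
    # the check behind the "slow requests" warnings above).
    out = subprocess.check_output(
        ['ceph', 'health', 'detail', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for name, check in json.loads(out).get('checks', {}).items():
        print(name, check['summary']['message'])
    # Per-op detail lives on the OSD's admin socket:
    #   ceph daemon osd.2 dump_ops_in_flight
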
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.030 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
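
The radosgw "beast" lines are access-log records: client, user, timestamp, request line, HTTP status, byte count, latency. The anonymous HEAD / probes arriving every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks. A small regex sketch, written against exactly this line format, for pulling the fields out:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\S+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:14:11:08.388 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('request'), m.group('status'))
    # 192.168.122.100 HEAD / HTTP/1.0 200
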
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.421 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:08 compute-2 sudo[238041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:08 compute-2 sudo[238041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:08 compute-2 sudo[238041]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:08 compute-2 sudo[238066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:08 compute-2 sudo[238066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:08 compute-2 sudo[238066]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:08 compute-2 ceph-mon[77081]: pgmap v1292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 727 KiB/s wr, 190 op/s
Jan 22 14:11:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:08.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:08.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.899 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.902 226437 INFO nova.compute.manager [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Terminating instance
Jan 22 14:11:08 compute-2 nova_compute[226433]: 2026-01-22 14:11:08.904 226437 DEBUG nova.compute.manager [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 22 14:11:09 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:09 compute-2 kernel: tap3fe867d7-5e (unregistering): left promiscuous mode
Jan 22 14:11:09 compute-2 NetworkManager[49000]: <info>  [1769091069.6921] device (tap3fe867d7-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 14:11:09 compute-2 ovn_controller[133156]: 2026-01-22T14:11:09Z|00039|binding|INFO|Releasing lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 from this chassis (sb_readonly=0)
Jan 22 14:11:09 compute-2 ovn_controller[133156]: 2026-01-22T14:11:09Z|00040|binding|INFO|Setting lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 down in Southbound
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.707 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 ovn_controller[133156]: 2026-01-22T14:11:09Z|00041|binding|INFO|Removing iface tap3fe867d7-5e ovn-installed in OVS
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.710 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.726 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], port_security=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.8/26 fdfe:381f:8400::3c7/64', 'neutron:device_id': '2314cf64-76a5-4383-8f2e-58228261f71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=3fe867d7-5ecf-4683-85f1-5f2bdce33a78) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:11:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.727 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 unbound from our chassis
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.728 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.729 143497 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18c81f01-33be-49a1-a179-aecc87794f99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 22 14:11:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.730 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcfcc19-6825-4935-9467-7e7bb3ad4925]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.732 143497 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 namespace which is not needed anymore
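
With the last VIF gone from network 18c81f01-33be-49a1-a179-aecc87794f99, the metadata agent stops the per-network haproxy container and deletes its ovnmeta- namespace (the remove_netns call logged shortly after). A hedged sketch of the namespace removal itself, assuming pyroute2's netns helpers (which is what neutron's privileged ip_lib wraps) and root privileges:

    from pyroute2 import netns

    # Namespace name copied from the log; remove() unlinks the
    # /var/run/netns entry once nothing holds it open.
    ns = 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99'
    if ns in netns.listnetns():
        netns.remove(ns)
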
Jan 22 14:11:09 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 22 14:11:09 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 12.898s CPU time.
Jan 22 14:11:09 compute-2 systemd-machined[194970]: Machine qemu-1-instance-00000006 terminated.
Jan 22 14:11:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:09.823+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:09 compute-2 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : haproxy version is 2.8.14-c23fe91
Jan 22 14:11:09 compute-2 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : path to executable is /usr/sbin/haproxy
Jan 22 14:11:09 compute-2 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [WARNING]  (237924) : Exiting Master process...
Jan 22 14:11:09 compute-2 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [ALERT]    (237924) : Current worker (237926) exited with code 143 (Terminated)
Jan 22 14:11:09 compute-2 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [WARNING]  (237924) : All workers exited. Exiting... (0)
Jan 22 14:11:09 compute-2 systemd[1]: libpod-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7.scope: Deactivated successfully.
Jan 22 14:11:09 compute-2 podman[238116]: 2026-01-22 14:11:09.890640545 +0000 UTC m=+0.057498772 container died 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:11:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-39f1a7dbb9fbf437360a4b9755ab2b91a6644c636b82f1e3d91c08d6fa81b3c7-merged.mount: Deactivated successfully.
Jan 22 14:11:09 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7-userdata-shm.mount: Deactivated successfully.
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.925 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 podman[238116]: 2026-01-22 14:11:09.926671778 +0000 UTC m=+0.093529975 container cleanup 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.930 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 systemd[1]: libpod-conmon-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7.scope: Deactivated successfully.
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.938 226437 INFO nova.virt.libvirt.driver [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance destroyed successfully.
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.939 226437 DEBUG nova.objects.instance [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'resources' on Instance uuid 2314cf64-76a5-4383-8f2e-58228261f71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.963 226437 DEBUG nova.virt.libvirt.vif [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-2',id=6,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-22T14:10:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:10:56Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=2314cf64-76a5-4383-8f2e-58228261f71b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.963 226437 DEBUG nova.network.os_vif_util [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.964 226437 DEBUG nova.network.os_vif_util [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.964 226437 DEBUG os_vif [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.966 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.966 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fe867d7-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.967 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.969 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:09 compute-2 nova_compute[226433]: 2026-01-22 14:11:09.971 226437 INFO os_vif [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e')
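
os-vif detaches the tap from br-int through ovsdbapp's DelPortCommand with if_exists=True, so a port that is already gone is not an error. The ovs-vsctl equivalent of that one-command transaction, sketched via subprocess with the device name copied from the log:

    import subprocess

    # Same effect as DelPortCommand(port='tap3fe867d7-5e',
    # bridge='br-int', if_exists=True) in the txn above.
    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int',
         'tap3fe867d7-5e'],
        check=True)
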
Jan 22 14:11:09 compute-2 podman[238152]: 2026-01-22 14:11:09.996945278 +0000 UTC m=+0.049715947 container remove 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.002 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0c5d2f-5924-491f-88a8-765414ae7b65]: (4, ('Thu Jan 22 02:11:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 (7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7)\n7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7\nThu Jan 22 02:11:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 (7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7)\n7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.003 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[45535e34-96a6-4a20-b412-71b0f7f8a792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.004 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.006 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.018 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:10 compute-2 kernel: tap18c81f01-30: left promiscuous mode
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.020 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.022 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[d543cbe5-fe44-4cdd-9a11-295f3fa4a7a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.044 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[20fe42ec-5536-4de1-9926-ba462bea7edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.046 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b62390e2-5213-47c2-bc10-a8d39fb1c8b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.058 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[39ec3f00-fdb9-4f80-bfd6-090e1dbfb7ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490822, 'reachable_time': 37114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238190, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:10 compute-2 systemd[1]: run-netns-ovnmeta\x2d18c81f01\x2d33be\x2d49a1\x2da179\x2daecc87794f99.mount: Deactivated successfully.
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.068 143856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 22 14:11:10 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.069 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[b7259e92-773a-499b-b50e-ed9694a97746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:11:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.451 226437 DEBUG nova.compute.manager [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-unplugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG oslo_concurrency.lockutils [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG oslo_concurrency.lockutils [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG oslo_concurrency.lockutils [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG nova.compute.manager [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] No waiting events found dispatching network-vif-unplugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG nova.compute.manager [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-unplugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.493 226437 INFO nova.virt.libvirt.driver [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deleting instance files /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b_del
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.493 226437 INFO nova.virt.libvirt.driver [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deletion of /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b_del complete
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.573 226437 INFO nova.compute.manager [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 1.67 seconds to destroy the instance on the hypervisor.
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.574 226437 DEBUG oslo.service.loopingcall [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.574 226437 DEBUG nova.compute.manager [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 22 14:11:10 compute-2 nova_compute[226433]: 2026-01-22 14:11:10.575 226437 DEBUG nova.network.neutron [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 22 14:11:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:10 compute-2 ceph-mon[77081]: pgmap v1293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 252 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 727 KiB/s wr, 190 op/s
Jan 22 14:11:10 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:10 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:10.796+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:11 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:11.760+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:11 compute-2 nova_compute[226433]: 2026-01-22 14:11:11.916 226437 DEBUG nova.network.neutron [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:11:11 compute-2 nova_compute[226433]: 2026-01-22 14:11:11.947 226437 INFO nova.compute.manager [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 1.37 seconds to deallocate network for instance.
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.042 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.042 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.178 226437 DEBUG oslo_concurrency.processutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.318 226437 DEBUG nova.compute.manager [req-51684e27-dc3b-4f8b-8975-aa9bdea9550b req-7ed5cb0e-153a-4b88-a447-1b37cd3d1cc7 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-deleted-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:11:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:12.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.585 226437 DEBUG nova.compute.manager [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.586 226437 DEBUG oslo_concurrency.lockutils [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.586 226437 DEBUG oslo_concurrency.lockutils [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.586 226437 DEBUG oslo_concurrency.lockutils [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.587 226437 DEBUG nova.compute.manager [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] No waiting events found dispatching network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.587 226437 WARNING nova.compute.manager [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received unexpected event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for instance with vm_state deleted and task_state None.
Jan 22 14:11:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:11:12 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4163491815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.605 226437 DEBUG oslo_concurrency.processutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
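
Deleting the instance makes the resource tracker re-measure disk capacity, which on an RBD-backed deployment means shelling out to ceph df --format=json; at 0.427s, the call itself is feeling the cluster's slow ops. A sketch of running the same command and reading the per-pool stats (field names such as bytes_used/max_avail are an assumption based on current Ceph JSON output):

    import json
    import subprocess

    # The exact command logged above; client.openstack is the cephx
    # identity the ceph-mon audit lines show dispatching "df".
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for pool in json.loads(out)['pools']:
        stats = pool['stats']
        print(pool['name'], stats['bytes_used'], stats['max_avail'])
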
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.609 226437 DEBUG nova.compute.provider_tree [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.631 226437 DEBUG nova.scheduler.client.report [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.669 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:12.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.707 226437 INFO nova.scheduler.client.report [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Deleted allocations for instance 2314cf64-76a5-4383-8f2e-58228261f71b
Jan 22 14:11:12 compute-2 ceph-mon[77081]: pgmap v1294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 247 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 145 op/s
Jan 22 14:11:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:12 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4163491815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:12.762+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:12 compute-2 nova_compute[226433]: 2026-01-22 14:11:12.826 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:13 compute-2 nova_compute[226433]: 2026-01-22 14:11:13.424 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:13.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:13 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:14.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:14.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:14.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:14 compute-2 ceph-mon[77081]: pgmap v1295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 247 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 144 KiB/s rd, 1.1 MiB/s wr, 59 op/s
Jan 22 14:11:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:14 compute-2 nova_compute[226433]: 2026-01-22 14:11:14.970 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:15.587 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:11:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:15.588 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:11:15 compute-2 nova_compute[226433]: 2026-01-22 14:11:15.588 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:15.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:15 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:15 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:16.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:16.715+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:16 compute-2 ceph-mon[77081]: pgmap v1296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 170 KiB/s rd, 1.7 MiB/s wr, 99 op/s
Jan 22 14:11:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:16 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1231319645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:17.737+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:17 compute-2 nova_compute[226433]: 2026-01-22 14:11:17.994 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769091062.9939866, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:11:17 compute-2 nova_compute[226433]: 2026-01-22 14:11:17.995 226437 INFO nova.compute.manager [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Stopped (Lifecycle Event)
Jan 22 14:11:18 compute-2 nova_compute[226433]: 2026-01-22 14:11:18.083 226437 DEBUG nova.compute.manager [None req-f6315c38-80c4-4dec-86b4-db8b117b2dcd - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:11:18 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:18 compute-2 nova_compute[226433]: 2026-01-22 14:11:18.427 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:18.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:18.708+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:19 compute-2 ceph-mon[77081]: pgmap v1297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 157 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 1.6 MiB/s wr, 69 op/s
Jan 22 14:11:19 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3325865220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:11:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3325865220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:11:19 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:19.590 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:11:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:19.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:19 compute-2 nova_compute[226433]: 2026-01-22 14:11:19.973 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:20 compute-2 podman[238221]: 2026-01-22 14:11:20.017230778 +0000 UTC m=+0.065390251 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 14:11:20 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:20 compute-2 ceph-mon[77081]: pgmap v1298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 1.6 MiB/s wr, 76 op/s
Jan 22 14:11:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:20.689+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:21 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:21.693+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:22 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:22 compute-2 ceph-mon[77081]: pgmap v1299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 1.6 MiB/s wr, 76 op/s
Jan 22 14:11:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:22.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:22.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:23 compute-2 nova_compute[226433]: 2026-01-22 14:11:23.429 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:23.704+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:24.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:24 compute-2 ceph-mon[77081]: pgmap v1300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 565 KiB/s wr, 46 op/s
Jan 22 14:11:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:24 compute-2 nova_compute[226433]: 2026-01-22 14:11:24.937 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769091069.9359367, 2314cf64-76a5-4383-8f2e-58228261f71b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:11:24 compute-2 nova_compute[226433]: 2026-01-22 14:11:24.937 226437 INFO nova.compute.manager [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Stopped (Lifecycle Event)
Jan 22 14:11:24 compute-2 nova_compute[226433]: 2026-01-22 14:11:24.959 226437 DEBUG nova.compute.manager [None req-80fa0f9f-2d47-4e76-8496-6222328ab9a1 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:11:24 compute-2 nova_compute[226433]: 2026-01-22 14:11:24.975 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:25 compute-2 nova_compute[226433]: 2026-01-22 14:11:25.591 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:25.638+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:25 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:25 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:26.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:26.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:26 compute-2 ceph-mon[77081]: pgmap v1301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 565 KiB/s wr, 46 op/s
Jan 22 14:11:26 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:27.594+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:28 compute-2 nova_compute[226433]: 2026-01-22 14:11:28.430 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:28.594+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:28 compute-2 sudo[238245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:28 compute-2 sudo[238245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:28 compute-2 sudo[238245]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:28 compute-2 sudo[238270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:28 compute-2 sudo[238270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:28 compute-2 sudo[238270]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:28 compute-2 ceph-mon[77081]: pgmap v1302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Jan 22 14:11:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:29.568+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:29 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:29 compute-2 nova_compute[226433]: 2026-01-22 14:11:29.977 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:30.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:30.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:30 compute-2 ceph-mon[77081]: pgmap v1303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Jan 22 14:11:30 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:30 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:30 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:31.543+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:32.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:33 compute-2 ceph-mon[77081]: pgmap v1304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:33 compute-2 nova_compute[226433]: 2026-01-22 14:11:33.432 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:33 compute-2 nova_compute[226433]: 2026-01-22 14:11:33.546 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:33.580+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:34 compute-2 ceph-mon[77081]: pgmap v1305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:34.554+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:34 compute-2 nova_compute[226433]: 2026-01-22 14:11:34.980 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:35 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:35.601+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.039 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "f591d61b-712e-49aa-85bd-8d222b607eb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.039 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "f591d61b-712e-49aa-85bd-8d222b607eb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.066 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:11:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:36 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:36 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:36 compute-2 ceph-mon[77081]: pgmap v1306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.176 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.177 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.186 226437 DEBUG nova.virt.hardware [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.186 226437 INFO nova.compute.claims [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.390 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.415 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.415 226437 DEBUG nova.compute.provider_tree [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:11:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:36.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.460 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.507 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:36.554+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:36 compute-2 nova_compute[226433]: 2026-01-22 14:11:36.575 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:11:36 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/744848052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.005 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.012 226437 DEBUG nova.compute.provider_tree [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.040 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.074 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.075 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.155 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 22 14:11:37 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:37 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/744848052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.179 226437 INFO nova.virt.libvirt.driver [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.219 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.375 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.377 226437 DEBUG nova.virt.libvirt.driver [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.377 226437 INFO nova.virt.libvirt.driver [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Creating image(s)
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.416 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.453 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.491 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.495 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.581 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.582 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.583 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.583 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:37.589+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.617 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:11:37 compute-2 nova_compute[226433]: 2026-01-22 14:11:37.621 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 f591d61b-712e-49aa-85bd-8d222b607eb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:38 compute-2 podman[238415]: 2026-01-22 14:11:38.066152565 +0000 UTC m=+0.118771223 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:11:38 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:38 compute-2 ceph-mon[77081]: pgmap v1307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:11:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3987265266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.435 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:11:38 compute-2 nova_compute[226433]: 2026-01-22 14:11:38.547 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:38.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:39 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:11:39 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1569152983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.606 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.607 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.608 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.608 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:39.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:39 compute-2 nova_compute[226433]: 2026-01-22 14:11:39.982 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:11:40 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2149985450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.053 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.257 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.259 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4808MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.259 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.259 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:40 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:40 compute-2 ceph-mon[77081]: pgmap v1308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 126 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Jan 22 14:11:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2149985450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.429 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.430 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.430 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.431 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:11:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.496 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:40.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:11:40 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/32100446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.967 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:11:40 compute-2 nova_compute[226433]: 2026-01-22 14:11:40.975 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:11:41 compute-2 nova_compute[226433]: 2026-01-22 14:11:41.014 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:11:41 compute-2 nova_compute[226433]: 2026-01-22 14:11:41.071 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:11:41 compute-2 nova_compute[226433]: 2026-01-22 14:11:41.072 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:41 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:41 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/32100446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:41.620+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:42 compute-2 nova_compute[226433]: 2026-01-22 14:11:42.067 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:42 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:42 compute-2 ceph-mon[77081]: pgmap v1309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 22 14:11:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/433737356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:42.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:42 compute-2 nova_compute[226433]: 2026-01-22 14:11:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:42.670+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:42.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:43 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:43 compute-2 nova_compute[226433]: 2026-01-22 14:11:43.437 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:43.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:44 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:44 compute-2 ceph-mon[77081]: pgmap v1310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 22 14:11:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3018484842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:11:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:44.603+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:11:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:44.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:11:44 compute-2 nova_compute[226433]: 2026-01-22 14:11:44.985 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:45 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4002379441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:11:45 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.476589) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105476628, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2475, "num_deletes": 251, "total_data_size": 4802638, "memory_usage": 4879512, "flush_reason": "Manual Compaction"}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105498290, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 3142489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34865, "largest_seqno": 37335, "table_properties": {"data_size": 3133257, "index_size": 5342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23921, "raw_average_key_size": 21, "raw_value_size": 3112819, "raw_average_value_size": 2796, "num_data_blocks": 230, "num_entries": 1113, "num_filter_entries": 1113, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090934, "oldest_key_time": 1769090934, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 21873 microseconds, and 7570 cpu microseconds.
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498455) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 3142489 bytes OK
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498486) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500920) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500946) EVENT_LOG_v1 {"time_micros": 1769091105500938, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 4791386, prev total WAL file size 4791386, number of live WAL files 2.
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.503263) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(3068KB)], [66(7663KB)]
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105503296, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10989740, "oldest_snapshot_seqno": -1}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 7643 keys, 9277398 bytes, temperature: kUnknown
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105569524, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9277398, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9231769, "index_size": 25421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 203073, "raw_average_key_size": 26, "raw_value_size": 9097649, "raw_average_value_size": 1190, "num_data_blocks": 983, "num_entries": 7643, "num_filter_entries": 7643, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.569822) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9277398 bytes
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.571657) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.6 rd, 139.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8158, records dropped: 515 output_compression: NoCompression
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.571677) EVENT_LOG_v1 {"time_micros": 1769091105571668, "job": 40, "event": "compaction_finished", "compaction_time_micros": 66348, "compaction_time_cpu_micros": 30275, "output_level": 6, "num_output_files": 1, "total_output_size": 9277398, "num_input_records": 8158, "num_output_records": 7643, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105572246, "job": 40, "event": "table_file_deletion", "file_number": 68}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105573454, "job": 40, "event": "table_file_deletion", "file_number": 66}
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.503209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:11:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:45.615+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:46 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:46 compute-2 ceph-mon[77081]: pgmap v1311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 3.5 MiB/s wr, 42 op/s
Jan 22 14:11:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:46.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:46.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:47.182 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:47.183 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:11:47.183 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:11:47 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:47.559+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:48 compute-2 nova_compute[226433]: 2026-01-22 14:11:48.439 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:48 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:48 compute-2 ceph-mon[77081]: pgmap v1312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 316 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 3.5 MiB/s wr, 42 op/s
Jan 22 14:11:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:48.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 14:11:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:48.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 14:11:49 compute-2 sudo[238490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:49 compute-2 sudo[238490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:49 compute-2 sudo[238490]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:49 compute-2 sudo[238515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:49 compute-2 sudo[238515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:49 compute-2 sudo[238515]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:49 compute-2 nova_compute[226433]: 2026-01-22 14:11:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:11:49 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:49.616+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:49 compute-2 nova_compute[226433]: 2026-01-22 14:11:49.987 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:50 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:50 compute-2 ceph-mon[77081]: pgmap v1313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.5 MiB/s wr, 91 op/s
Jan 22 14:11:50 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:50.617+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:51 compute-2 podman[238541]: 2026-01-22 14:11:51.009445204 +0000 UTC m=+0.066658495 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:11:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:51 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:51 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:51.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:52 compute-2 ceph-mon[77081]: pgmap v1314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 22 14:11:52 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:52.640+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:52.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:53 compute-2 nova_compute[226433]: 2026-01-22 14:11:53.442 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:53 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:53.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:54 compute-2 ceph-mon[77081]: pgmap v1315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 215 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:11:54 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2093419999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:11:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:54.651+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:54 compute-2 nova_compute[226433]: 2026-01-22 14:11:54.990 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:55 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:55 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:11:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:55.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:11:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:56.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:56 compute-2 ceph-mon[77081]: pgmap v1316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 22 14:11:56 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:56.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:57 compute-2 ovn_controller[133156]: 2026-01-22T14:11:57Z|00042|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 22 14:11:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:57.612+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:57 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:57 compute-2 sudo[238563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:57 compute-2 sudo[238563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:57 compute-2 sudo[238563]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-2 sudo[238588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:11:58 compute-2 sudo[238588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:58 compute-2 sudo[238588]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-2 sudo[238613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:11:58 compute-2 sudo[238613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:58 compute-2 sudo[238613]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-2 sudo[238638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:11:58 compute-2 sudo[238638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:11:58 compute-2 nova_compute[226433]: 2026-01-22 14:11:58.444 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:11:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:11:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:11:58 compute-2 sudo[238638]: pam_unix(sudo:session): session closed for user root
Jan 22 14:11:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:58.655+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:11:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:11:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:58.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:11:58 compute-2 ceph-mon[77081]: pgmap v1317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 14:11:58 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.508 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.509 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.540 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.631 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.631 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.638 226437 DEBUG nova.virt.hardware [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.638 226437 INFO nova.compute.claims [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:11:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:59.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:11:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:59 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:11:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:11:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:11:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:11:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:11:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:11:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.948 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:11:59 compute-2 nova_compute[226433]: 2026-01-22 14:11:59.994 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:12:00 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1699627580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.371 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.377 226437 DEBUG nova.compute.provider_tree [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.436 226437 DEBUG nova.scheduler.client.report [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.465 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.466 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:12:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.558 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.558 226437 DEBUG nova.network.neutron [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.590 226437 INFO nova.virt.libvirt.driver [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.643 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:12:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:00.686+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:00.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.850 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.851 226437 DEBUG nova.virt.libvirt.driver [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.852 226437 INFO nova.virt.libvirt.driver [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Creating image(s)
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.885 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.922 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.953 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:12:00 compute-2 nova_compute[226433]: 2026-01-22 14:12:00.958 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:12:00 compute-2 ceph-mon[77081]: pgmap v1318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 22 14:12:00 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:00 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1699627580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:00 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.011 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.012 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.013 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.013 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.043 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.047 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:12:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.614 226437 DEBUG nova.network.neutron [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 22 14:12:01 compute-2 nova_compute[226433]: 2026-01-22 14:12:01.614 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:12:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:01.732+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:02 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:02.725+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:03 compute-2 ceph-mon[77081]: pgmap v1319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 589 KiB/s rd, 13 KiB/s wr, 51 op/s
Jan 22 14:12:03 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:03 compute-2 nova_compute[226433]: 2026-01-22 14:12:03.487 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:03.773+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:04 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:04 compute-2 ceph-mon[77081]: pgmap v1320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 169 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:12:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:04.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:04.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:04.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:04 compute-2 nova_compute[226433]: 2026-01-22 14:12:04.998 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:05 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:05.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:06.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:06 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:06 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:06 compute-2 ceph-mon[77081]: pgmap v1321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Jan 22 14:12:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:06.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:06.814+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:06 compute-2 sudo[238816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:06 compute-2 sudo[238816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:06 compute-2 sudo[238816]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:07 compute-2 sudo[238841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:12:07 compute-2 sudo[238841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:07 compute-2 sudo[238841]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:07 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:12:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:12:07 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:12:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:07.807+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:08 compute-2 nova_compute[226433]: 2026-01-22 14:12:08.536 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:08 compute-2 ceph-mon[77081]: pgmap v1322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:08 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:08.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:09 compute-2 podman[238867]: 2026-01-22 14:12:09.034566262 +0000 UTC m=+0.094370958 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 14:12:09 compute-2 sudo[238894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:09 compute-2 sudo[238894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:09 compute-2 sudo[238894]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:09 compute-2 sudo[238919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:09 compute-2 sudo[238919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:09 compute-2 sudo[238919]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:09 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:09.751+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:10 compute-2 nova_compute[226433]: 2026-01-22 14:12:09.999 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:10 compute-2 ceph-mon[77081]: pgmap v1323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:10 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:10 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:10.744+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:11 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:11.785+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:12.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:12 compute-2 ceph-mon[77081]: pgmap v1324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:12 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:12.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:13 compute-2 nova_compute[226433]: 2026-01-22 14:12:13.537 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:13 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:13.770+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:14 compute-2 ceph-mon[77081]: pgmap v1325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:14 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:14.818+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:15 compute-2 nova_compute[226433]: 2026-01-22 14:12:15.003 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:15 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:15 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:15.850+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:16.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:16.834+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:17 compute-2 ceph-mon[77081]: pgmap v1326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:12:17 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:17.799+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:18 compute-2 nova_compute[226433]: 2026-01-22 14:12:18.540 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:18.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:18 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:18 compute-2 ceph-mon[77081]: pgmap v1327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:12:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3872109376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:12:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:18.832+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:19 compute-2 sshd-session[238949]: Invalid user solana from 45.148.10.240 port 59338
Jan 22 14:12:19 compute-2 sshd-session[238949]: Connection closed by invalid user solana 45.148.10.240 port 59338 [preauth]
Jan 22 14:12:19 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:19 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:19.800+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:20 compute-2 nova_compute[226433]: 2026-01-22 14:12:20.005 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:20.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:20.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:20.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:21 compute-2 ceph-mon[77081]: pgmap v1328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:21 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:21 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:21.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:22 compute-2 podman[238952]: 2026-01-22 14:12:22.045178593 +0000 UTC m=+0.096979347 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 14:12:22 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:22 compute-2 ceph-mon[77081]: pgmap v1329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:22.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:22.736+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:23 compute-2 nova_compute[226433]: 2026-01-22 14:12:23.543 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:23.693+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:23 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:23 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1692563106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:24.662+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:24 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:24 compute-2 ceph-mon[77081]: pgmap v1330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:24 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:25 compute-2 nova_compute[226433]: 2026-01-22 14:12:25.008 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:25.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:25.632+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:25 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:25 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:26.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:26.634+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:27 compute-2 ceph-mon[77081]: pgmap v1331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:27 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:27.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:27.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:28 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:28.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:28 compute-2 nova_compute[226433]: 2026-01-22 14:12:28.546 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:28.652+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:29.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:29 compute-2 ceph-mon[77081]: pgmap v1332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 211 MiB data, 328 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:29 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:29.646+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:29 compute-2 sudo[238975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:29 compute-2 sudo[238975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:29 compute-2 sudo[238975]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:29 compute-2 sudo[239000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:29 compute-2 sudo[239000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:29 compute-2 sudo[239000]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:30 compute-2 nova_compute[226433]: 2026-01-22 14:12:30.011 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:30.628+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:31 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:31 compute-2 ceph-mon[77081]: pgmap v1333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 223 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 391 KiB/s wr, 1 op/s
Jan 22 14:12:31 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:31 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:31.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:31.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:32 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 14:12:32 compute-2 ceph-mon[77081]: pgmap v1334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:32.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:32.564+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:33.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:33 compute-2 nova_compute[226433]: 2026-01-22 14:12:33.582 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:33.598+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:34 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:34 compute-2 nova_compute[226433]: 2026-01-22 14:12:34.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:34.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:35 compute-2 nova_compute[226433]: 2026-01-22 14:12:35.014 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:35 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:35 compute-2 ceph-mon[77081]: pgmap v1335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:35 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:35.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:35.592+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:36 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:36 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:36 compute-2 ceph-mon[77081]: pgmap v1336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:36.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:37 compute-2 nova_compute[226433]: 2026-01-22 14:12:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:37 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:37.555+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:38 compute-2 nova_compute[226433]: 2026-01-22 14:12:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:38.518+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:38 compute-2 nova_compute[226433]: 2026-01-22 14:12:38.584 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:38 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:38 compute-2 ceph-mon[77081]: pgmap v1337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:38 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:38 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:12:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:39.547+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:39.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:39 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.907 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.935 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:12:39 compute-2 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.016 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:40 compute-2 podman[239030]: 2026-01-22 14:12:40.049033687 +0000 UTC m=+0.108598124 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 22 14:12:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:12:40 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2193478577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.361 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:12:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:40.522+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:40.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.544 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.545 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4771MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.546 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.546 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.640 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.640 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.640 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.641 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:12:40 compute-2 nova_compute[226433]: 2026-01-22 14:12:40.641 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:12:41 compute-2 ceph-mon[77081]: pgmap v1338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 22 14:12:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4207568101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2193478577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:41 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:41 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:41 compute-2 nova_compute[226433]: 2026-01-22 14:12:41.117 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:12:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:41.490+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:12:41 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3654665343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:41 compute-2 nova_compute[226433]: 2026-01-22 14:12:41.522 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
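
[annotation] The resource tracker shells out to the exact `ceph df` command logged above to size the RBD-backed disk pool. A minimal standalone equivalent, assuming /etc/ceph/ceph.conf and a client.openstack keyring are in place (the JSON key names match recent Ceph releases but may vary):

```python
# Standalone equivalent of the command nova logs above.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    timeout=30,
)
stats = json.loads(out)
# Cluster-wide totals live under "stats"; per-pool entries under "pools".
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
```
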
Jan 22 14:12:41 compute-2 nova_compute[226433]: 2026-01-22 14:12:41.527 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:12:41 compute-2 nova_compute[226433]: 2026-01-22 14:12:41.542 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
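
[annotation] The inventory line above encodes how placement derives schedulable capacity for this host: per resource class, (total - reserved) * allocation_ratio, against which integer allocation amounts are checked. Worked out for the values logged here:

```python
# Placement capacity from the inventory logged above:
# (total - reserved) * allocation_ratio per resource class.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity}")
# -> VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 17.1
```
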
Jan 22 14:12:41 compute-2 nova_compute[226433]: 2026-01-22 14:12:41.561 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:12:41 compute-2 nova_compute[226433]: 2026-01-22 14:12:41.562 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:12:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:41.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
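
[annotation] The recurring radosgw "beast" lines are access-log records: client IP, user, timestamp, request line, HTTP status, byte count, and latency. A small parser sketch whose field layout is inferred from these samples (not from rgw documentation):

```python
# Hedged parser for the beast access-log lines in this section.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
        '[22/Jan/2026:14:12:41.569 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST.search(line)
print(m.group("ip"), m.group("request"), m.group("status"), m.group("latency"))
```

These anonymous HEAD / probes every second from 192.168.122.100/.102 are load-balancer style health checks, consistent with their fixed cadence in the log.
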
Jan 22 14:12:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3007474767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:42 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3654665343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:12:42 compute-2 nova_compute[226433]: 2026-01-22 14:12:42.169 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:42 compute-2 nova_compute[226433]: 2026-01-22 14:12:42.170 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:42 compute-2 nova_compute[226433]: 2026-01-22 14:12:42.170 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:42 compute-2 nova_compute[226433]: 2026-01-22 14:12:42.170 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:12:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:42.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:42.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:43 compute-2 ceph-mon[77081]: pgmap v1339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 14:12:43 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:43.463+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:43 compute-2 nova_compute[226433]: 2026-01-22 14:12:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:12:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:43.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:43 compute-2 nova_compute[226433]: 2026-01-22 14:12:43.586 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:44 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:44 compute-2 ceph-mon[77081]: pgmap v1340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:44.419+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:45 compute-2 nova_compute[226433]: 2026-01-22 14:12:45.020 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:45 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:45.469+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:45.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:46 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:46 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:46 compute-2 ceph-mon[77081]: pgmap v1341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:46.486+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:47 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:12:47.183 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:12:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:12:47.184 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:12:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:12:47.184 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:12:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:47.527+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:47.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:48 compute-2 ceph-mon[77081]: pgmap v1342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:48 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:48.482+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:48 compute-2 nova_compute[226433]: 2026-01-22 14:12:48.589 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:49 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:49.442+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:49 compute-2 sudo[239109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:49 compute-2 sudo[239109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:49 compute-2 sudo[239109]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:49 compute-2 sudo[239134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:12:49 compute-2 sudo[239134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:12:49 compute-2 sudo[239134]: pam_unix(sudo:session): session closed for user root
Jan 22 14:12:50 compute-2 nova_compute[226433]: 2026-01-22 14:12:50.024 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:50 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:50 compute-2 ceph-mon[77081]: pgmap v1343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:50.429+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:51 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:51 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2157 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:51.423+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:51.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:52 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:52 compute-2 ceph-mon[77081]: pgmap v1344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:52.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:52 compute-2 podman[239161]: 2026-01-22 14:12:52.993125107 +0000 UTC m=+0.057681827 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
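
[annotation] The podman health_status events come from the configured healthcheck (the /openstack/healthcheck script mounted into the container, per the config_data above). The same state can be read back on demand; the JSON key path differs between podman releases, hence the fallback:

```python
# Sketch: read the health state these events report; key path is
# "State.Health" on recent podman, "State.Healthcheck" on older ones.
import json
import subprocess

raw = subprocess.check_output(["podman", "inspect", "ovn_metadata_agent"])
state = json.loads(raw)[0]["State"]
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"), "failing streak:", health.get("FailingStreak"))
```
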
Jan 22 14:12:53 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:53.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:53.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:53 compute-2 nova_compute[226433]: 2026-01-22 14:12:53.590 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:54 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:54 compute-2 ceph-mon[77081]: pgmap v1345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:54.452+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:12:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:12:54 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 14:12:55 compute-2 nova_compute[226433]: 2026-01-22 14:12:55.028 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:55 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:55.403+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:55.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:12:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:56.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:56 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:56 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2162 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:12:56 compute-2 ceph-mon[77081]: pgmap v1346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:57.455+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:57 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:57 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 14:12:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:12:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:12:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:58.450+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:12:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:58 compute-2 nova_compute[226433]: 2026-01-22 14:12:58.593 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:12:58 compute-2 ceph-mon[77081]: pgmap v1347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:12:58 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:12:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:59.436+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:12:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:12:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:12:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:12:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:59.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:12:59 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:00 compute-2 nova_compute[226433]: 2026-01-22 14:13:00.035 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:00.427+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:00 compute-2 ceph-mon[77081]: pgmap v1348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:00 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:00 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2167 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:01.468+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:01.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:02 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:02.448+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:03 compute-2 ceph-mon[77081]: pgmap v1349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:03 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:03.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:03.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:03 compute-2 nova_compute[226433]: 2026-01-22 14:13:03.594 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:04 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:04.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:05 compute-2 nova_compute[226433]: 2026-01-22 14:13:05.038 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:05.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:05 compute-2 ceph-mon[77081]: pgmap v1350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:05 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:05.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:06.421+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:06 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:06 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2172 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:06 compute-2 ceph-mon[77081]: pgmap v1351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:07 compute-2 sudo[239188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:07 compute-2 sudo[239188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-2 sudo[239188]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:07 compute-2 sudo[239213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:13:07 compute-2 sudo[239213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-2 sudo[239213]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:07 compute-2 sudo[239238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:07 compute-2 sudo[239238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-2 sudo[239238]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:07 compute-2 sudo[239263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:13:07 compute-2 sudo[239263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:07.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:07.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:07 compute-2 sudo[239263]: pam_unix(sudo:session): session closed for user root
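
[annotation] The sudo audit trail above is the cephadm orchestrator probing this host over SSH: a couple of `/bin/true` connectivity checks, a `which python3`, then the copied cephadm binary run with `gather-facts`. The subcommand can be invoked by hand to see what the mgr collects; assuming (as on this host) that gather-facts emits a JSON fact document, and that root is required:

```python
# Same subcommand the orchestrator issues above (timeout value taken
# from the logged invocation); output keys are an assumption here.
import json
import subprocess

facts = json.loads(subprocess.check_output(
    ["cephadm", "gather-facts"], timeout=895
))
print(facts.get("hostname"), facts.get("memory_total_kb"))
```
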
Jan 22 14:13:07 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:08.348+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:08.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:08 compute-2 nova_compute[226433]: 2026-01-22 14:13:08.598 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:08 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:08 compute-2 ceph-mon[77081]: pgmap v1352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:08 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:09.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:09.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:10 compute-2 sudo[239319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:10 compute-2 sudo[239319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:10 compute-2 sudo[239319]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:10 compute-2 nova_compute[226433]: 2026-01-22 14:13:10.040 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:10 compute-2 sudo[239344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:10 compute-2 sudo[239344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:10 compute-2 sudo[239344]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:10 compute-2 podman[239368]: 2026-01-22 14:13:10.215126226 +0000 UTC m=+0.095242441 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 14:13:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:10.403+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:10.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:10 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:10 compute-2 ceph-mon[77081]: pgmap v1353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:13:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:13:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:11.354+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:11.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:11 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:11 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2177 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:13:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:13:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:13:11 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:12.348+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:13 compute-2 ceph-mon[77081]: pgmap v1354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:13 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:13.387+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:13 compute-2 nova_compute[226433]: 2026-01-22 14:13:13.600 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:14 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:14.351+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:14.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:15 compute-2 nova_compute[226433]: 2026-01-22 14:13:15.043 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:15 compute-2 ceph-mon[77081]: pgmap v1355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:15 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:15.304+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:15.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.775698) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195775774, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1416, "num_deletes": 256, "total_data_size": 2586040, "memory_usage": 2614016, "flush_reason": "Manual Compaction"}
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195802539, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1687738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37340, "largest_seqno": 38751, "table_properties": {"data_size": 1682028, "index_size": 2850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14642, "raw_average_key_size": 20, "raw_value_size": 1669526, "raw_average_value_size": 2351, "num_data_blocks": 124, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091106, "oldest_key_time": 1769091106, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 26889 microseconds, and 11408 cpu microseconds.
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.802593) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1687738 bytes OK
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.802615) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.850271) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.850339) EVENT_LOG_v1 {"time_micros": 1769091195850304, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.850361) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2579160, prev total WAL file size 2579160, number of live WAL files 2.
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.851221) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1648KB)], [69(9059KB)]
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195851360, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 10965136, "oldest_snapshot_seqno": -1}
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7828 keys, 10801446 bytes, temperature: kUnknown
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195995076, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 10801446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10753256, "index_size": 27534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 208536, "raw_average_key_size": 26, "raw_value_size": 10614504, "raw_average_value_size": 1355, "num_data_blocks": 1068, "num_entries": 7828, "num_filter_entries": 7828, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.995522) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10801446 bytes
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.998210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.2 rd, 75.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 8353, records dropped: 525 output_compression: NoCompression
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.998279) EVENT_LOG_v1 {"time_micros": 1769091195998257, "job": 42, "event": "compaction_finished", "compaction_time_micros": 143813, "compaction_time_cpu_micros": 47953, "output_level": 6, "num_output_files": 1, "total_output_size": 10801446, "num_input_records": 8353, "num_output_records": 7828, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:13:15 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195998946, "job": 42, "event": "table_file_deletion", "file_number": 71}
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091196001091, "job": 42, "event": "table_file_deletion", "file_number": 69}
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.851070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:13:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:16.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:16 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:16 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2182 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:16 compute-2 ceph-mon[77081]: pgmap v1356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:17.272+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:17 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:17.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:17 compute-2 sudo[239401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:17 compute-2 sudo[239401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:17 compute-2 sudo[239401]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:17 compute-2 sudo[239426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:13:17 compute-2 sudo[239426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:17 compute-2 sudo[239426]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:18.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:18 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:13:18 compute-2 ceph-mon[77081]: pgmap v1357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/379890725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:13:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/379890725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:13:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:18 compute-2 nova_compute[226433]: 2026-01-22 14:13:18.601 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:19.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:19 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:19.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:20 compute-2 nova_compute[226433]: 2026-01-22 14:13:20.047 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:20.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:20 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:20 compute-2 ceph-mon[77081]: pgmap v1358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:21.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:21 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:21 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2187 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:21.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:22.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:22 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:22 compute-2 ceph-mon[77081]: pgmap v1359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:23.362+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:23 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:23 compute-2 nova_compute[226433]: 2026-01-22 14:13:23.603 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:23.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:24 compute-2 podman[239454]: 2026-01-22 14:13:24.012667448 +0000 UTC m=+0.063764488 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 14:13:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:24.408+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:24 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:24 compute-2 ceph-mon[77081]: pgmap v1360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:25 compute-2 nova_compute[226433]: 2026-01-22 14:13:25.050 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:25.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:25 compute-2 sshd-session[239474]: Connection closed by authenticating user root 92.118.39.95 port 54254 [preauth]
Jan 22 14:13:25 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:25 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2192 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:25.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:26.385+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:26.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:26 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:26 compute-2 ceph-mon[77081]: pgmap v1361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:26 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:27.372+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:27.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:27 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:28.362+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:28.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:28 compute-2 nova_compute[226433]: 2026-01-22 14:13:28.604 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:28 compute-2 ceph-mon[77081]: pgmap v1362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:28 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:29.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:29.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:29 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:30 compute-2 nova_compute[226433]: 2026-01-22 14:13:30.052 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:30 compute-2 sudo[239478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:30 compute-2 sudo[239478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:30 compute-2 sudo[239478]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:30.426+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:13:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:13:30 compute-2 ceph-mon[77081]: pgmap v1363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:30 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:30 compute-2 sudo[239504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:30 compute-2 sudo[239504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:30 compute-2 sudo[239504]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:31.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:31.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:31 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:31 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:32.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:32 compute-2 ceph-mon[77081]: pgmap v1364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:32 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:33.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:33 compute-2 nova_compute[226433]: 2026-01-22 14:13:33.607 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:33 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:34.344+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:34 compute-2 nova_compute[226433]: 2026-01-22 14:13:34.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:34.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:34 compute-2 ceph-mon[77081]: pgmap v1365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:34 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:35 compute-2 nova_compute[226433]: 2026-01-22 14:13:35.055 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:35.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 14:13:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:35.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 14:13:35 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:35 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:36.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:36.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:36 compute-2 ceph-mon[77081]: pgmap v1366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:36 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:37.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:37 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:38.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:38 compute-2 nova_compute[226433]: 2026-01-22 14:13:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:38 compute-2 nova_compute[226433]: 2026-01-22 14:13:38.609 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:38 compute-2 ceph-mon[77081]: pgmap v1367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:38 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:39.374+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:39 compute-2 nova_compute[226433]: 2026-01-22 14:13:39.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:39 compute-2 nova_compute[226433]: 2026-01-22 14:13:39.579 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:13:39 compute-2 nova_compute[226433]: 2026-01-22 14:13:39.579 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:13:39 compute-2 nova_compute[226433]: 2026-01-22 14:13:39.580 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:13:39 compute-2 nova_compute[226433]: 2026-01-22 14:13:39.580 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:13:39 compute-2 nova_compute[226433]: 2026-01-22 14:13:39.581 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:13:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:39 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:13:39 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/914354036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:39 compute-2 nova_compute[226433]: 2026-01-22 14:13:39.984 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
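
The ~0.4 s `ceph df --format=json` round-trips above are how the resource tracker sizes the RBD-backed disk inventory. A minimal sketch of reading the same totals; the top-level "stats" layout with total_bytes/total_avail_bytes is the one recent Ceph releases emit and is assumed here:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    # Cluster-wide totals; per-pool figures live under df['pools'].
    stats = df['stats']
    gib = 1024 ** 3
    print('total %.1f GiB, avail %.1f GiB' % (
        stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))
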
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.058 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.160 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.161 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4781MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.161 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.161 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:13:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:40.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.440 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.514 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:13:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:13:40 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3392952699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.923 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.929 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:13:40 compute-2 ceph-mon[77081]: pgmap v1368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/914354036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:40 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:40 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1473123618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3392952699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.963 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
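
The inventory dict above is what placement sizes this node from: for each resource class the schedulable amount is (total - reserved) × allocation_ratio, with rounding down assumed here. A worked check against the logged values:

    def capacity(total, reserved, allocation_ratio):
        # Schedulable amount placement derives from one inventory record.
        return int((total - reserved) * allocation_ratio)

    print(capacity(8, 0, 4.0))        # VCPU      -> 32
    print(capacity(7679, 512, 1.0))   # MEMORY_MB -> 7167
    print(capacity(20, 1, 0.9))       # DISK_GB   -> 17
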
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.966 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:13:40 compute-2 nova_compute[226433]: 2026-01-22 14:13:40.966 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:13:41 compute-2 podman[239578]: 2026-01-22 14:13:41.025665757 +0000 UTC m=+0.087322391 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
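
The podman health_status events above come from the periodic healthcheck configured on the container (the mounted /openstack/healthcheck script). The current state can also be read from `podman inspect`; the JSON key has moved between podman releases, so this sketch tries both spellings:

    import json
    import subprocess

    out = subprocess.check_output(['podman', 'inspect', 'ovn_controller'])
    state = json.loads(out)[0]['State']
    # Recent podman reports healthcheck results under State.Health;
    # older releases used State.Healthcheck.
    health = state.get('Health') or state.get('Healthcheck') or {}
    print(health.get('Status'), health.get('FailingStreak'))
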
Jan 22 14:13:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:41.350+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:41.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:41 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2960319933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:13:41 compute-2 nova_compute[226433]: 2026-01-22 14:13:41.968 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:41 compute-2 nova_compute[226433]: 2026-01-22 14:13:41.969 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:41 compute-2 nova_compute[226433]: 2026-01-22 14:13:41.969 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:13:41 compute-2 nova_compute[226433]: 2026-01-22 14:13:41.969 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.018 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.018 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:42.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:42 compute-2 nova_compute[226433]: 2026-01-22 14:13:42.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:13:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:42 compute-2 ceph-mon[77081]: pgmap v1369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:42 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:43.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:43.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:43 compute-2 nova_compute[226433]: 2026-01-22 14:13:43.646 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:44 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:44.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:44 compute-2 nova_compute[226433]: 2026-01-22 14:13:44.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:45 compute-2 nova_compute[226433]: 2026-01-22 14:13:45.061 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:45 compute-2 ceph-mon[77081]: pgmap v1370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:45 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:45.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:45.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:46 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:46 compute-2 ceph-mon[77081]: pgmap v1371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:46 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:46.384+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:46.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:13:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:13:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:13:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:13:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:13:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
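
The acquire/acquired/released triple above is oslo.concurrency's named in-process lock, the same helper the nova resource tracker uses for "compute_resources" earlier in this window. Its decorator form, as a minimal sketch (the lock name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_children():
        # Body runs with the named lock held; the acquire / "acquired" /
        # "released" debug lines above bracket exactly this region.
        pass

    check_children()
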
Jan 22 14:13:47 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:47.370+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:47.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:48 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:48 compute-2 ceph-mon[77081]: pgmap v1372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:48.355+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:48 compute-2 nova_compute[226433]: 2026-01-22 14:13:48.647 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:49 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:49.345+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:49.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:50 compute-2 nova_compute[226433]: 2026-01-22 14:13:50.063 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:50.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:50 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:50 compute-2 ceph-mon[77081]: pgmap v1373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:50 compute-2 sudo[239608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:50 compute-2 sudo[239608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:50 compute-2 sudo[239608]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:50 compute-2 sudo[239633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:13:50 compute-2 sudo[239633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:13:50 compute-2 sudo[239633]: pam_unix(sudo:session): session closed for user root
Jan 22 14:13:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:51.329+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:51 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:51 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:51.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:52.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:52 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:52 compute-2 ceph-mon[77081]: pgmap v1374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:52.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:53.378+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:53 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:53.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:53 compute-2 nova_compute[226433]: 2026-01-22 14:13:53.650 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:54.329+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:54 compute-2 nova_compute[226433]: 2026-01-22 14:13:54.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:13:54 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:54 compute-2 ceph-mon[77081]: pgmap v1375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:54.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:54 compute-2 podman[239660]: 2026-01-22 14:13:54.979009418 +0000 UTC m=+0.044088604 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:13:55 compute-2 nova_compute[226433]: 2026-01-22 14:13:55.066 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:55.284+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:55.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:55 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:55 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:55 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:13:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:13:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:56.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:56 compute-2 ceph-mon[77081]: pgmap v1376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:56 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:56.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:57.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:57.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:57 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:58.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:58 compute-2 nova_compute[226433]: 2026-01-22 14:13:58.651 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:13:58 compute-2 ceph-mon[77081]: pgmap v1377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:13:58 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:13:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:58.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:13:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:59.264+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:13:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:13:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:13:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:13:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:13:59 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:00 compute-2 nova_compute[226433]: 2026-01-22 14:14:00.069 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:00.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:00 compute-2 ceph-mon[77081]: pgmap v1378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:00 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:00 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:01.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:01 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:02.203+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:02 compute-2 ceph-mon[77081]: pgmap v1379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:02 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:03.157+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:03 compute-2 nova_compute[226433]: 2026-01-22 14:14:03.654 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:03.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:03 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:04.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:05 compute-2 nova_compute[226433]: 2026-01-22 14:14:05.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:05 compute-2 ceph-mon[77081]: pgmap v1380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:05 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:05.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:05.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:06 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:06 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:06.295+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:06.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:07 compute-2 ceph-mon[77081]: pgmap v1381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:07 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:07.344+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:07.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:08 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:08 compute-2 ceph-mon[77081]: pgmap v1382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:08.376+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:08 compute-2 nova_compute[226433]: 2026-01-22 14:14:08.656 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:09 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:09.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:09.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:10 compute-2 nova_compute[226433]: 2026-01-22 14:14:10.074 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:10 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:10 compute-2 ceph-mon[77081]: pgmap v1383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:10.404+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:11 compute-2 sudo[239687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:11 compute-2 sudo[239687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:11 compute-2 sudo[239687]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:11 compute-2 sudo[239713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:11 compute-2 sudo[239713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:11 compute-2 sudo[239713]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:11 compute-2 podman[239711]: 2026-01-22 14:14:11.253243481 +0000 UTC m=+0.112547467 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 14:14:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:11.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:11 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:11 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:12.479+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:12.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:12 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:12 compute-2 ceph-mon[77081]: pgmap v1384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:12 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:13.502+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:13 compute-2 nova_compute[226433]: 2026-01-22 14:14:13.659 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:13.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:13 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:14.462+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:14 compute-2 ceph-mon[77081]: pgmap v1385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:14 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:15 compute-2 nova_compute[226433]: 2026-01-22 14:14:15.076 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:15.442+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:15.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:15 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:15 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:16.420+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:17 compute-2 ceph-mon[77081]: pgmap v1386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:17 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:17.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:17.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:17 compute-2 sudo[239766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:17 compute-2 sudo[239766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:17 compute-2 sudo[239766]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:17 compute-2 sudo[239791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:14:17 compute-2 sudo[239791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:17 compute-2 sudo[239791]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:18 compute-2 sudo[239816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:18 compute-2 sudo[239816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:18 compute-2 sudo[239816]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:18 compute-2 sudo[239841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:14:18 compute-2 sudo[239841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:18 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:18.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:18 compute-2 sudo[239841]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:18 compute-2 nova_compute[226433]: 2026-01-22 14:14:18.661 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:19 compute-2 ceph-mon[77081]: pgmap v1387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3226480098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:14:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3226480098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:14:19 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:19.384+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:19.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:20 compute-2 nova_compute[226433]: 2026-01-22 14:14:20.078 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:20 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:20 compute-2 ceph-mon[77081]: pgmap v1388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:14:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:14:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:20.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:21 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:21 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:21.398+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:21.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:22 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:22 compute-2 ceph-mon[77081]: pgmap v1389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:22.441+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:22.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:23 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:23.408+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:23.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:23 compute-2 nova_compute[226433]: 2026-01-22 14:14:23.715 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:24 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:24 compute-2 ceph-mon[77081]: pgmap v1390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:24.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:25 compute-2 nova_compute[226433]: 2026-01-22 14:14:25.080 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:25 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:25.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:25 compute-2 podman[239901]: 2026-01-22 14:14:25.995278946 +0000 UTC m=+0.057464005 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:14:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:26 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:26 compute-2 ceph-mon[77081]: pgmap v1391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:26 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2252 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:14:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:26.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:26 compute-2 sudo[239920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:26 compute-2 sudo[239920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:26 compute-2 sudo[239920]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:26 compute-2 sudo[239945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:14:26 compute-2 sudo[239945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:26 compute-2 sudo[239945]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:27 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:27.376+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:28.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:28 compute-2 nova_compute[226433]: 2026-01-22 14:14:28.730 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:29 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:29 compute-2 ceph-mon[77081]: pgmap v1392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:29.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:30 compute-2 nova_compute[226433]: 2026-01-22 14:14:30.083 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:30 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:30 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:30 compute-2 ceph-mon[77081]: pgmap v1393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:30.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
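[annotation] The _set_new_cache_sizes values above are byte counts; converted (my arithmetic, nothing from the log beyond the numbers themselves), the mon is steering toward roughly a 973 MiB cache with 332 MiB incremental/full allocations and a 304 MiB kv allocation:

    # Unit check on the mon cache line above: values are bytes.
    for name, b in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                    ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(name, round(b / 2**20, 1), "MiB")
    # cache_size 972.8 MiB, inc_alloc 332.0 MiB,
    # full_alloc 332.0 MiB, kv_alloc 304.0 MiB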
Jan 22 14:14:31 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:31 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2257 sec, osd.2 has slow ops (SLOW_OPS)
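[annotation] The blocked-for counter in these SLOW_OPS updates advances by about 5 s per message (2257 s here, then 2262/2267/2272 s below), so the same oldest op (the omap-get-vals read of rbd_mirror_snapshot_schedule reported by osd.2) stays stuck rather than new ops piling up. A hedged sketch of drilling into it with standard Ceph commands; it assumes a reachable cluster keyring, and the daemon call additionally assumes access to osd.2's admin socket on this host (inside its container on a cephadm deployment):

    import json
    import subprocess

    # Cluster-level view of the SLOW_OPS check seen above.
    health = subprocess.run(
        ["ceph", "health", "detail", "--format=json"],
        capture_output=True, text=True, check=True)
    print(json.loads(health.stdout).get("checks", {}).get("SLOW_OPS"))

    # Per-OSD view of the ops currently in flight on osd.2.
    ops = subprocess.run(
        ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
        capture_output=True, text=True, check=True)
    print(json.loads(ops.stdout)["num_ops"])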
Jan 22 14:14:31 compute-2 sudo[239973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:31 compute-2 sudo[239973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:31 compute-2 sudo[239973]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:31 compute-2 sudo[239998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:31 compute-2 sudo[239998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:31 compute-2 sudo[239998]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:31.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:14:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:31.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:14:32 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:32 compute-2 ceph-mon[77081]: pgmap v1394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:32.341+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:33 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.288579) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273288606, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1309, "num_deletes": 251, "total_data_size": 2308696, "memory_usage": 2338600, "flush_reason": "Manual Compaction"}
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273298785, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1505369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38756, "largest_seqno": 40060, "table_properties": {"data_size": 1500082, "index_size": 2555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13785, "raw_average_key_size": 20, "raw_value_size": 1488459, "raw_average_value_size": 2251, "num_data_blocks": 110, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091196, "oldest_key_time": 1769091196, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 10291 microseconds, and 4055 cpu microseconds.
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.298861) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1505369 bytes OK
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.298893) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.301455) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.301481) EVENT_LOG_v1 {"time_micros": 1769091273301473, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.301506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2302318, prev total WAL file size 2302318, number of live WAL files 2.
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(1470KB)], [72(10MB)]
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273302761, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 12306815, "oldest_snapshot_seqno": -1}
Jan 22 14:14:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:33.320+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 7972 keys, 10595837 bytes, temperature: kUnknown
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273376102, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 10595837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10546997, "index_size": 27800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 212751, "raw_average_key_size": 26, "raw_value_size": 10405774, "raw_average_value_size": 1305, "num_data_blocks": 1075, "num_entries": 7972, "num_filter_entries": 7972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.376683) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10595837 bytes
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.379278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.4 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 8489, records dropped: 517 output_compression: NoCompression
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.379332) EVENT_LOG_v1 {"time_micros": 1769091273379295, "job": 44, "event": "compaction_finished", "compaction_time_micros": 73522, "compaction_time_cpu_micros": 30833, "output_level": 6, "num_output_files": 1, "total_output_size": 10595837, "num_input_records": 8489, "num_output_records": 7972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273379958, "job": 44, "event": "table_file_deletion", "file_number": 74}
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273383876, "job": 44, "event": "table_file_deletion", "file_number": 72}
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:14:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
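[annotation] The job-44 compaction summary above can be cross-checked from the EVENT_LOG byte counts: L0 input table #74 is 1,505,369 B, total compaction input is 12,306,815 B, and output table #75 is 10,595,837 B. A small worked check (my arithmetic, reproducing the logged write-amplify(7.0) and read-write-amplify(15.2)):

    # Byte counts copied from the EVENT_LOG lines above.
    l0_in = 1505369            # L0 input: table #74
    total_in = 12306815        # compaction_started input_data_size
    l6_in = total_in - l0_in   # 10801446 B from the existing L6 file #72
    out = 10595837             # output: table #75

    write_amp = out / l0_in             # bytes written per L0 byte flushed
    rw_amp = (total_in + out) / l0_in   # bytes read + written per L0 byte
    print(round(write_amp, 1), round(rw_amp, 1))   # 7.0 15.2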
Jan 22 14:14:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:33.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:33 compute-2 nova_compute[226433]: 2026-01-22 14:14:33.734 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:34.282+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:34 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:34 compute-2 ceph-mon[77081]: pgmap v1395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:34 compute-2 sshd-session[240024]: Invalid user pbanx from 45.148.10.240 port 45848
Jan 22 14:14:34 compute-2 sshd-session[240024]: Connection closed by invalid user pbanx 45.148.10.240 port 45848 [preauth]
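[annotation] The pair of sshd-session lines above records a failed brute-force probe: an unknown user (pbanx) from 45.148.10.240, dropped before authentication. A sketch for tallying such probes from the journal; it assumes journalctl is available and keys off the sshd-session syslog identifier used in this log (older OpenSSH versions log under sshd instead):

    import re
    import subprocess

    # Pull all sshd-session messages and count "Invalid user" probes.
    out = subprocess.run(
        ["journalctl", "-t", "sshd-session", "-o", "cat", "--no-pager"],
        capture_output=True, text=True, check=True).stdout

    hits = re.findall(r"Invalid user (\S+) from (\S+) port (\d+)", out)
    for user, addr, port in hits:
        print(user, addr, port)   # e.g. pbanx 45.148.10.240 45848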
Jan 22 14:14:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:35 compute-2 nova_compute[226433]: 2026-01-22 14:14:35.086 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:35.242+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:35 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:35 compute-2 nova_compute[226433]: 2026-01-22 14:14:35.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:35.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:36.292+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:36 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:36 compute-2 ceph-mon[77081]: pgmap v1396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:36 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2262 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:37.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:37 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:37.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:38.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:38 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:38 compute-2 ceph-mon[77081]: pgmap v1397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:38 compute-2 nova_compute[226433]: 2026-01-22 14:14:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:38 compute-2 nova_compute[226433]: 2026-01-22 14:14:38.734 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:39.301+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:39 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:40 compute-2 nova_compute[226433]: 2026-01-22 14:14:40.089 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:40.281+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:40 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:40 compute-2 ceph-mon[77081]: pgmap v1398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:40 compute-2 nova_compute[226433]: 2026-01-22 14:14:40.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:40.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:41.289+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:41 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:41 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.645 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.645 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.776 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.777 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.777 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.778 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:14:41 compute-2 nova_compute[226433]: 2026-01-22 14:14:41.779 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:14:42 compute-2 podman[240041]: 2026-01-22 14:14:42.073224279 +0000 UTC m=+0.131335255 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 14:14:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:14:42 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1519008089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.246 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
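[annotation] As the two processutils lines show, nova's resource tracker shells out to ceph df to size its RBD-backed storage (0.467 s for this round trip, audited by the mon as client.openstack). A sketch of the same call; the --id openstack and --conf values are copied from the captured command line, and the stats keys follow the ceph df JSON schema as I know it:

    import json
    import subprocess

    # Same command nova_compute runs above, parsed instead of logged.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout

    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"], stats["total_used_bytes"],
          stats["total_avail_bytes"])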
Jan 22 14:14:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:42.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.476 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.477 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4757MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.477 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.478 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:14:42 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:42 compute-2 ceph-mon[77081]: pgmap v1399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1519008089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.574 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:14:42 compute-2 nova_compute[226433]: 2026-01-22 14:14:42.656 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:14:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:14:43 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2614008316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:43 compute-2 nova_compute[226433]: 2026-01-22 14:14:43.113 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:14:43 compute-2 nova_compute[226433]: 2026-01-22 14:14:43.124 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:14:43 compute-2 nova_compute[226433]: 2026-01-22 14:14:43.151 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:14:43 compute-2 nova_compute[226433]: 2026-01-22 14:14:43.152 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:14:43 compute-2 nova_compute[226433]: 2026-01-22 14:14:43.152 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
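[annotation] The inventory reported unchanged above implies the following schedulable capacity, assuming placement's usual capacity formula, capacity = (total - reserved) * allocation_ratio (a worked example from the logged numbers, not something this log states directly):

    # Inventory copied from the set_inventory_for_provider line above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 17.1

That leaves ample headroom against the final resource view logged just before (3 of 8 vCPUs allocated, 896 MB RAM used including the 512 MB reservation, 3 GB disk used).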
Jan 22 14:14:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:43.378+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:43 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2997728244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2614008316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:43.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:43 compute-2 nova_compute[226433]: 2026-01-22 14:14:43.774 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:44 compute-2 nova_compute[226433]: 2026-01-22 14:14:44.147 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:44 compute-2 nova_compute[226433]: 2026-01-22 14:14:44.148 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:44 compute-2 nova_compute[226433]: 2026-01-22 14:14:44.148 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:14:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:44.363+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:44 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:44 compute-2 ceph-mon[77081]: pgmap v1400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2571695261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:14:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:45 compute-2 nova_compute[226433]: 2026-01-22 14:14:45.091 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:45.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:45 compute-2 nova_compute[226433]: 2026-01-22 14:14:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:14:45 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:45.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:46.368+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:46 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:46 compute-2 ceph-mon[77081]: pgmap v1401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:46 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:14:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:14:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:14:47.186 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:14:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:14:47.186 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:14:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:47.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:47 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:47.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:48.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:48 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:48 compute-2 ceph-mon[77081]: pgmap v1402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:48 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:48 compute-2 nova_compute[226433]: 2026-01-22 14:14:48.775 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:48.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:49.365+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:49 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:49.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:50 compute-2 nova_compute[226433]: 2026-01-22 14:14:50.094 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:50.367+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:50 compute-2 ceph-mon[77081]: pgmap v1403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:50 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:50 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2277 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:50.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:51.385+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:51 compute-2 sudo[240106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:51 compute-2 sudo[240106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:51 compute-2 sudo[240106]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:51 compute-2 sudo[240131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:14:51 compute-2 sudo[240131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:14:51 compute-2 sudo[240131]: pam_unix(sudo:session): session closed for user root
Jan 22 14:14:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:51.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:51 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:52.400+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:52 compute-2 ceph-mon[77081]: pgmap v1404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:52 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:53.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:53.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:53 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:53 compute-2 nova_compute[226433]: 2026-01-22 14:14:53.806 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:54.406+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:14:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:14:55 compute-2 nova_compute[226433]: 2026-01-22 14:14:55.096 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:55.433+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:55 compute-2 ceph-mon[77081]: pgmap v1405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:55 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:55.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:14:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:56.447+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:56 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:56 compute-2 ceph-mon[77081]: pgmap v1406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:56 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:14:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:14:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:56.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:14:56 compute-2 podman[240159]: 2026-01-22 14:14:56.985633443 +0000 UTC m=+0.050749444 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:14:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:57.473+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:57 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:57.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:58.445+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:58 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:58 compute-2 ceph-mon[77081]: pgmap v1407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:14:58 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:58 compute-2 nova_compute[226433]: 2026-01-22 14:14:58.810 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:14:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:59.438+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:14:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:14:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:14:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:14:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:59.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:14:59 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:00 compute-2 nova_compute[226433]: 2026-01-22 14:15:00.098 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:00.398+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:00 compute-2 ceph-mon[77081]: pgmap v1408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:00 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:00 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:01.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:01.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:01 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:02.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:02 compute-2 ceph-mon[77081]: pgmap v1409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:02 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:03.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:03.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:03 compute-2 nova_compute[226433]: 2026-01-22 14:15:03.812 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:03 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:04.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:04.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:04 compute-2 ceph-mon[77081]: pgmap v1410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:04 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:05 compute-2 nova_compute[226433]: 2026-01-22 14:15:05.101 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:05.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:05.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:06 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:06 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:06.343+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:06.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:07 compute-2 ceph-mon[77081]: pgmap v1411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:07 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:07.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:07.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:08.319+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:08 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:08 compute-2 nova_compute[226433]: 2026-01-22 14:15:08.853 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:09.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:09 compute-2 ceph-mon[77081]: pgmap v1412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:09 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:09.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:10 compute-2 nova_compute[226433]: 2026-01-22 14:15:10.105 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:10.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:10 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:10 compute-2 ceph-mon[77081]: pgmap v1413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:10.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:11.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:11 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:11 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:11 compute-2 sudo[240186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:11 compute-2 sudo[240186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:11 compute-2 sudo[240186]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:11 compute-2 sudo[240211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:11 compute-2 sudo[240211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:11 compute-2 sudo[240211]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:11.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:12.186+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:12 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:12 compute-2 ceph-mon[77081]: pgmap v1414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:13 compute-2 podman[240237]: 2026-01-22 14:15:13.052493455 +0000 UTC m=+0.111986531 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 14:15:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:13.187+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:13 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:13 compute-2 nova_compute[226433]: 2026-01-22 14:15:13.855 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:14.201+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:14 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:14 compute-2 ceph-mon[77081]: pgmap v1415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:14.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:15 compute-2 nova_compute[226433]: 2026-01-22 14:15:15.109 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:15.223+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:15 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:15.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:16.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:16 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:16 compute-2 ceph-mon[77081]: pgmap v1416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:16 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:16.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:17.232+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:17 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:17.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:18.191+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:18 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:18 compute-2 ceph-mon[77081]: pgmap v1417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:15:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1676732257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:15:18 compute-2 nova_compute[226433]: 2026-01-22 14:15:18.855 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:18.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:19.228+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:19 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:19 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:19.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:20 compute-2 nova_compute[226433]: 2026-01-22 14:15:20.112 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:20.198+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:20 compute-2 ceph-mon[77081]: pgmap v1418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:20 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:20 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:21.237+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:21.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:22 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:22.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:23.277+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:23 compute-2 ceph-mon[77081]: pgmap v1419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:23 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:23.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:23 compute-2 nova_compute[226433]: 2026-01-22 14:15:23.858 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:24.263+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:24 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:24 compute-2 ceph-mon[77081]: pgmap v1420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:25 compute-2 nova_compute[226433]: 2026-01-22 14:15:25.115 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:25.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:25.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:25 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:26.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:26 compute-2 sudo[240270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:26 compute-2 sudo[240270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-2 sudo[240270]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:26 compute-2 sudo[240296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:15:26 compute-2 sudo[240296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-2 sudo[240296]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:26 compute-2 sudo[240321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:26 compute-2 sudo[240321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-2 sudo[240321]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:26 compute-2 sudo[240346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:15:26 compute-2 sudo[240346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:26 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:26 compute-2 ceph-mon[77081]: pgmap v1421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:26 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:26 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:27.218+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:27 compute-2 sudo[240346]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:27.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:28 compute-2 podman[240403]: 2026-01-22 14:15:28.002566158 +0000 UTC m=+0.063296923 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:28 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:28 compute-2 ceph-mon[77081]: pgmap v1422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:15:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:15:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:28.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:28 compute-2 nova_compute[226433]: 2026-01-22 14:15:28.864 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:15:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7438 writes, 40K keys, 7438 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 7438 writes, 7438 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1865 writes, 9617 keys, 1865 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s
                                           Interval WAL: 1865 writes, 1865 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     88.6      0.50              0.14        22    0.023       0      0       0.0       0.0
                                             L6      1/0   10.10 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    130.7    109.8      1.66              0.53        21    0.079    135K    12K       0.0       0.0
                                            Sum      1/0   10.10 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    100.2    104.9      2.16              0.67        43    0.050    135K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    105.1    107.7      0.62              0.22        12    0.052     48K   4092       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    130.7    109.8      1.66              0.53        21    0.079    135K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     89.2      0.50              0.14        21    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.044, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.22 GB write, 0.09 MB/s write, 0.21 GB read, 0.09 MB/s read, 2.2 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 23.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000359 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1254,22.65 MB,7.44971%) FilterBlock(43,388.92 KB,0.124936%) IndexBlock(43,580.58 KB,0.186504%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:15:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:29.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:29.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:29 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:30 compute-2 nova_compute[226433]: 2026-01-22 14:15:30.118 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:30.168+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:31.215+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:31 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:31 compute-2 ceph-mon[77081]: pgmap v1423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:31.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:31 compute-2 sudo[240427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:31 compute-2 sudo[240427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:31 compute-2 sudo[240427]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:31 compute-2 sudo[240452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:31 compute-2 sudo[240452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:31 compute-2 sudo[240452]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:32.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:32 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:32 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:32.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:32 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:32 compute-2 ceph-mon[77081]: pgmap v1424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:32 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:33.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:33.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:33 compute-2 nova_compute[226433]: 2026-01-22 14:15:33.866 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:34 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:34.296+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:35 compute-2 ceph-mon[77081]: pgmap v1425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:35 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:35 compute-2 nova_compute[226433]: 2026-01-22 14:15:35.120 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:35.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:35 compute-2 nova_compute[226433]: 2026-01-22 14:15:35.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:35.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:36 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:36.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:37.228+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:37 compute-2 ceph-mon[77081]: pgmap v1426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:37 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:37 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:37.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:38 compute-2 sshd-session[240480]: Connection closed by authenticating user root 92.118.39.95 port 33246 [preauth]
Jan 22 14:15:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:38.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:38 compute-2 nova_compute[226433]: 2026-01-22 14:15:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:38 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:38 compute-2 ceph-mon[77081]: pgmap v1427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:38 compute-2 nova_compute[226433]: 2026-01-22 14:15:38.907 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:39.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:39 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:39 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:39.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:40 compute-2 nova_compute[226433]: 2026-01-22 14:15:40.121 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:40.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:40 compute-2 nova_compute[226433]: 2026-01-22 14:15:40.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:40 compute-2 ceph-mon[77081]: pgmap v1428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:40 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:41.230+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:41.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:41 compute-2 sudo[240484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:41 compute-2 sudo[240484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:41 compute-2 sudo[240484]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:42 compute-2 sudo[240509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:15:42 compute-2 sudo[240509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:42 compute-2 sudo[240509]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:42 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:42 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:15:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:42.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:42 compute-2 nova_compute[226433]: 2026-01-22 14:15:42.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:43.320+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.540 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.541 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.541 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.541 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.542 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.542 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.573 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.574 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.574 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.574 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.575 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:15:43 compute-2 ceph-mon[77081]: pgmap v1429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:43 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:43.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:43 compute-2 nova_compute[226433]: 2026-01-22 14:15:43.908 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:15:44 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3600868494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.020 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:15:44 compute-2 podman[240555]: 2026-01-22 14:15:44.060392457 +0000 UTC m=+0.110366558 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.197 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.198 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4798MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.198 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.199 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:15:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:44.369+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.607 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:15:44 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:44 compute-2 ceph-mon[77081]: pgmap v1430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3600868494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:44 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:44 compute-2 nova_compute[226433]: 2026-01-22 14:15:44.797 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:15:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.123 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:15:45 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3159148396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.218 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.226 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.357 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.361 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.362 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:15:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:45.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:15:45 compute-2 nova_compute[226433]: 2026-01-22 14:15:45.602 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:15:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1191107956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3159148396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:45 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:45.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:46.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:46 compute-2 ceph-mon[77081]: pgmap v1431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:46 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3975816590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:15:46 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:46 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:15:47.186 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:15:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:15:47.187 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:15:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:15:47.187 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:15:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:47.396+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:47 compute-2 nova_compute[226433]: 2026-01-22 14:15:47.603 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:47.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:47 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:48.369+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:48 compute-2 ceph-mon[77081]: pgmap v1432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:48 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:48 compute-2 nova_compute[226433]: 2026-01-22 14:15:48.961 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:49.355+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:49 compute-2 nova_compute[226433]: 2026-01-22 14:15:49.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:49 compute-2 nova_compute[226433]: 2026-01-22 14:15:49.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:15:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:49.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:50 compute-2 nova_compute[226433]: 2026-01-22 14:15:50.131 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:50 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:50.372+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:51.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:51.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:51 compute-2 ceph-mon[77081]: pgmap v1433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:51 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:51 compute-2 sudo[240610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:51 compute-2 sudo[240610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:51 compute-2 sudo[240610]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:52 compute-2 sudo[240635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:15:52 compute-2 sudo[240635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:15:52 compute-2 sudo[240635]: pam_unix(sudo:session): session closed for user root
Jan 22 14:15:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:52.396+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:52 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:52 compute-2 ceph-mon[77081]: pgmap v1434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:52 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:15:52 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:53.409+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:53 compute-2 nova_compute[226433]: 2026-01-22 14:15:53.962 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:54 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:54.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.132 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:55 compute-2 ceph-mon[77081]: pgmap v1435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:55 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:55.358+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.956 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.976 226437 WARNING nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] While synchronizing instance power states, found 3 instances in the database and 0 instances on the hypervisor.
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for e0e74330-96df-479f-8baf-53fbd2ccba91 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid f591d61b-712e-49aa-85bd-8d222b607eb3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid 87e798e6-6f00-4fe1-8412-75ddc9e2878e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "f591d61b-712e-49aa-85bd-8d222b607eb3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:15:55 compute-2 nova_compute[226433]: 2026-01-22 14:15:55.978 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:15:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:15:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:56.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:56 compute-2 nova_compute[226433]: 2026-01-22 14:15:56.532 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:15:56 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:56 compute-2 ceph-mon[77081]: pgmap v1436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:57.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:57.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:15:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:57.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:15:57 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:57 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:58.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:59 compute-2 nova_compute[226433]: 2026-01-22 14:15:59.003 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:15:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:15:59 compute-2 podman[240664]: 2026-01-22 14:15:59.026444663 +0000 UTC m=+0.091757364 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:15:59 compute-2 ceph-mon[77081]: pgmap v1437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:15:59 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 14:15:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:59.243+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:15:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:15:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:15:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:15:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:00 compute-2 nova_compute[226433]: 2026-01-22 14:16:00.176 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:00.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:00 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:16:00.411 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:16:00 compute-2 nova_compute[226433]: 2026-01-22 14:16:00.411 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:00 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:16:00.412 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:16:00 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:01.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:01 compute-2 ceph-mon[77081]: pgmap v1438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:01 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:01 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2347 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:02.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:03 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:03 compute-2 ceph-mon[77081]: pgmap v1439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:03 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:03.195+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:04 compute-2 nova_compute[226433]: 2026-01-22 14:16:04.007 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:04.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:04 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:05.172+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:05 compute-2 nova_compute[226433]: 2026-01-22 14:16:05.178 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:05 compute-2 ceph-mon[77081]: pgmap v1440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:05 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:06.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:07.195+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:07 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:07 compute-2 ceph-mon[77081]: pgmap v1441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:07 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:07 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:07 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:16:07.413 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:16:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:08.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:08 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:08 compute-2 ceph-mon[77081]: pgmap v1442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:09.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:09 compute-2 nova_compute[226433]: 2026-01-22 14:16:09.050 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:09.151+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:09.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:10.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:10 compute-2 nova_compute[226433]: 2026-01-22 14:16:10.217 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:10 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:11.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:11.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:11 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:11 compute-2 ceph-mon[77081]: pgmap v1443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.793839) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371793875, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1487, "num_deletes": 251, "total_data_size": 2780669, "memory_usage": 2818296, "flush_reason": "Manual Compaction"}
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371823670, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 1150158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40065, "largest_seqno": 41547, "table_properties": {"data_size": 1145332, "index_size": 2030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14943, "raw_average_key_size": 21, "raw_value_size": 1133865, "raw_average_value_size": 1652, "num_data_blocks": 88, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091274, "oldest_key_time": 1769091274, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 29903 microseconds, and 3542 cpu microseconds.
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.823734) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 1150158 bytes OK
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.823760) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.826098) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.826123) EVENT_LOG_v1 {"time_micros": 1769091371826115, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.826147) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2773549, prev total WAL file size 2773549, number of live WAL files 2.
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.827542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(1123KB)], [75(10MB)]
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371827577, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 11745995, "oldest_snapshot_seqno": -1}
Jan 22 14:16:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:11.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 8186 keys, 8509468 bytes, temperature: kUnknown
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371904584, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 8509468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8463046, "index_size": 24870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 218086, "raw_average_key_size": 26, "raw_value_size": 8321860, "raw_average_value_size": 1016, "num_data_blocks": 953, "num_entries": 8186, "num_filter_entries": 8186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:16:11 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.904843) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8509468 bytes
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.000100) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.4 rd, 110.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.6) write-amplify(7.4) OK, records in: 8658, records dropped: 472 output_compression: NoCompression
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.000144) EVENT_LOG_v1 {"time_micros": 1769091372000127, "job": 46, "event": "compaction_finished", "compaction_time_micros": 77093, "compaction_time_cpu_micros": 21170, "output_level": 6, "num_output_files": 1, "total_output_size": 8509468, "num_input_records": 8658, "num_output_records": 8186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091372000720, "job": 46, "event": "table_file_deletion", "file_number": 77}
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091372003600, "job": 46, "event": "table_file_deletion", "file_number": 75}
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.827430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:12 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:16:12 compute-2 sudo[240689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:12 compute-2 sudo[240689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:12 compute-2 sudo[240689]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:12 compute-2 sudo[240714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:12 compute-2 sudo[240714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:12.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:12 compute-2 sudo[240714]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:13.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:13.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:13 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:13 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:13 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:13 compute-2 ceph-mon[77081]: pgmap v1444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:14 compute-2 nova_compute[226433]: 2026-01-22 14:16:14.052 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:14.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:14 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:14 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:14 compute-2 ceph-mon[77081]: pgmap v1445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:14 compute-2 podman[240740]: 2026-01-22 14:16:14.568077719 +0000 UTC m=+0.096385058 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:16:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:15.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:15 compute-2 nova_compute[226433]: 2026-01-22 14:16:15.220 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:15.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:15.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:16 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:16 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:16.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:17.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:17.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:17 compute-2 ceph-mon[77081]: pgmap v1446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:17 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:17 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2362 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:17.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:18.277+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:16:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:16:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:16:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:16:18 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:18 compute-2 ceph-mon[77081]: pgmap v1447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:18 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:16:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:16:19 compute-2 nova_compute[226433]: 2026-01-22 14:16:19.055 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:19.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:19.308+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:19.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:20 compute-2 nova_compute[226433]: 2026-01-22 14:16:20.222 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:20.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:20 compute-2 ceph-mon[77081]: pgmap v1448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:21.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:21.282+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:21 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:21 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:21 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:21 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2367 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:21.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:22.298+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:22 compute-2 ceph-mon[77081]: pgmap v1449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:22 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:23.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:23.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:24 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:24 compute-2 nova_compute[226433]: 2026-01-22 14:16:24.057 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:24.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:25.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:25 compute-2 nova_compute[226433]: 2026-01-22 14:16:25.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:25.298+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:25 compute-2 ceph-mon[77081]: pgmap v1450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:25 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:25.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:26.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:27.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:27 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:27.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:27.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:28.237+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:28 compute-2 ceph-mon[77081]: pgmap v1451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:28 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:28 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2372 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:28 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:29 compute-2 nova_compute[226433]: 2026-01-22 14:16:29.060 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:29.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:29.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:29.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:29 compute-2 ceph-mon[77081]: pgmap v1452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:29 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 14:16:29 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:29 compute-2 podman[240775]: 2026-01-22 14:16:29.990819136 +0000 UTC m=+0.056871366 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 14:16:30 compute-2 nova_compute[226433]: 2026-01-22 14:16:30.225 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:30.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:31 compute-2 ceph-mon[77081]: pgmap v1453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:31 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:31.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:16:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.5 total, 600.0 interval
                                           Cumulative writes: 6977 writes, 27K keys, 6977 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6977 writes, 1551 syncs, 4.50 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1066 writes, 3434 keys, 1066 commit groups, 1.0 writes per commit group, ingest: 3.16 MB, 0.01 MB/s
                                           Interval WAL: 1066 writes, 439 syncs, 2.43 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:16:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:31.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:31.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:32 compute-2 sudo[240795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:32 compute-2 sudo[240795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:32 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:32 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2377 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:32 compute-2 sudo[240795]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:32.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:32 compute-2 sudo[240820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:32 compute-2 sudo[240820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:32 compute-2 sudo[240820]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:33.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:33.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:33 compute-2 ceph-mon[77081]: pgmap v1454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
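The pgmap lines give a one-line census of PG states plus data/usage/capacity totals; here 2 of 305 PGs are active+clean+laggy, which lines up with the slow-op warnings. A small sketch that parses the state breakdown into a dict; the line format is inferred from these samples.

    import re

    PGMAP = re.compile(r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+);")

    line = ("pgmap v1454: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail")
    m = PGMAP.search(line)
    states = {}
    if m:
        for part in m.group("states").split(","):
            count, state = part.strip().split(" ", 1)
            states[state] = int(count)
    print(states)   # {'active+clean+laggy': 2, 'active+clean': 303}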
Jan 22 14:16:33 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:33.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:34 compute-2 nova_compute[226433]: 2026-01-22 14:16:34.061 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:34.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:34 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:34 compute-2 ceph-mon[77081]: pgmap v1455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:35.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:35 compute-2 nova_compute[226433]: 2026-01-22 14:16:35.227 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:35.303+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:35 compute-2 nova_compute[226433]: 2026-01-22 14:16:35.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:36 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:36 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:36.305+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:37.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:37.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:37 compute-2 nova_compute[226433]: 2026-01-22 14:16:37.288 226437 DEBUG oslo_concurrency.lockutils [None req-dec0213c-d0ec-412c-9228-b640587c2a19 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "f591d61b-712e-49aa-85bd-8d222b607eb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
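This line shows nova serializing terminate_instance on the instance UUID through oslo.concurrency's lockutils, which is what the Acquiring lock / Lock acquired / Lock released triplets throughout this log are. A minimal sketch of the same pattern using lockutils.lock as a context manager; the lock name below is just the UUID copied from the line above, used illustratively.

    from oslo_concurrency import lockutils

    # Serialize work keyed on one instance UUID, the same pattern the
    # "Acquiring lock ... do_terminate_instance" line reflects.
    with lockutils.lock("f591d61b-712e-49aa-85bd-8d222b607eb3"):
        pass  # critical section: tear down the instance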
Jan 22 14:16:37 compute-2 ceph-mon[77081]: pgmap v1456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:37 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:37 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2387 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:37.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:38.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:38 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:38 compute-2 ceph-mon[77081]: pgmap v1457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:38 compute-2 ovn_controller[133156]: 2026-01-22T14:16:38Z|00043|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 14:16:39 compute-2 nova_compute[226433]: 2026-01-22 14:16:39.063 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:39.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:39.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:39.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:39 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:40 compute-2 nova_compute[226433]: 2026-01-22 14:16:40.231 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:40.281+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:40 compute-2 nova_compute[226433]: 2026-01-22 14:16:40.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:40 compute-2 nova_compute[226433]: 2026-01-22 14:16:40.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:41.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:41 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:41 compute-2 ceph-mon[77081]: pgmap v1458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:41 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:41.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:41.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:42 compute-2 sudo[240850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:42 compute-2 sudo[240850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-2 sudo[240850]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:42 compute-2 sudo[240875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:16:42 compute-2 sudo[240875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-2 sudo[240875]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:42.244+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:42 compute-2 sudo[240900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:42 compute-2 sudo[240900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-2 sudo[240900]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:42 compute-2 sudo[240925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:16:42 compute-2 sudo[240925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:42 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:42 compute-2 sudo[240925]: pam_unix(sudo:session): session closed for user root
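This sudo burst is cephadm's periodic host inventory: the mgr connects as ceph-admin, probes with /bin/true, resolves python3, then runs the deployed cephadm binary with gather-facts. A hedged sketch of invoking that entry point directly; the binary path is copied verbatim from the log, while gather-facts printing a JSON fact dump (with keys such as "hostname") is an assumption based on cephadm's documented behaviour, not something visible here.

    import json
    import subprocess

    # Path copied from the sudo line above; requires root via sudo.
    cephadm = ("/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    out = subprocess.check_output(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"]
    )
    # JSON output and the 'hostname' key are assumptions, not shown in the log.
    facts = json.loads(out)
    print(facts.get("hostname"))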
Jan 22 14:16:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:43.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:43.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:43 compute-2 ceph-mon[77081]: pgmap v1459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:43 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2392 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:43 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:43.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.065 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:44.277+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.536 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:16:44 compute-2 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:16:44 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:44 compute-2 ceph-mon[77081]: pgmap v1460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:16:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:16:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:16:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:16:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:16:44 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:16:45 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3043854192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:45 compute-2 podman[241004]: 2026-01-22 14:16:45.041159018 +0000 UTC m=+0.094523443 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.053 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
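update_available_resource sizes the RBD-backed disk pool by shelling out to ceph df, which is the ~0.5 s subprocess bracketed by the two lines above (and the client.openstack df dispatches the mon keeps logging). A sketch that runs the same command and reads per-pool stats; the JSON key names (pools -> stats -> max_avail) match recent Ceph releases but are an assumption here, since the log only shows the command and its exit status.

    import json
    import subprocess

    # Same invocation the resource tracker logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    df = json.loads(out)
    for pool in df["pools"]:
        # 'max_avail' is the projected writable capacity for the pool.
        print(pool["name"], pool["stats"]["max_avail"])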
Jan 22 14:16:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:45.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.223 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.225 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4799MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.225 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.225 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:16:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:45.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.290 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.340 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.340 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.340 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.341 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.341 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.355 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.369 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
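Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above advertises (8 - 0) * 4.0 = 32 VCPUs, (7679 - 512) * 1.0 = 7167 MB of RAM, and (20 - 1) * 0.9 = 17.1 GB of disk. The same arithmetic over the inventory dict copied from the log line:

    # Usable capacity per resource class, per the Placement capacity rule.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 17.1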
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.369 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.384 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.406 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.495 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:16:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:16:45 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1336531881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.929 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.935 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.953 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.976 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:16:45 compute-2 nova_compute[226433]: 2026-01-22 14:16:45.976 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:16:46 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3043854192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:46 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:46 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1336531881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:46.303+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:46 compute-2 nova_compute[226433]: 2026-01-22 14:16:46.978 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:46 compute-2 nova_compute[226433]: 2026-01-22 14:16:46.978 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:16:46 compute-2 nova_compute[226433]: 2026-01-22 14:16:46.978 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:16:47 compute-2 nova_compute[226433]: 2026-01-22 14:16:47.009 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:16:47 compute-2 nova_compute[226433]: 2026-01-22 14:16:47.009 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:16:47 compute-2 nova_compute[226433]: 2026-01-22 14:16:47.010 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:16:47 compute-2 nova_compute[226433]: 2026-01-22 14:16:47.010 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:16:47 compute-2 nova_compute[226433]: 2026-01-22 14:16:47.010 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:47 compute-2 ceph-mon[77081]: pgmap v1461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:47 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/353905065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:47.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:16:47.187 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:16:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:16:47.188 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:16:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:16:47.188 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:16:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:47.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:47 compute-2 nova_compute[226433]: 2026-01-22 14:16:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:16:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:48 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/336408558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:16:48 compute-2 sshd-session[241057]: Invalid user banxgg from 45.148.10.240 port 32836
Jan 22 14:16:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:48.266+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:48 compute-2 sshd-session[241057]: Connection closed by invalid user banxgg 45.148.10.240 port 32836 [preauth]
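These two sshd-session lines are routine SSH brute-force noise: an invalid user name probed from 45.148.10.240 and dropped before authentication. A small sketch that tallies such attempts per source address from a journal dump like this one; the regex is keyed to the exact phrasing shown here.

    import re
    import sys
    from collections import Counter

    # Tally "Invalid user NAME from IP port P" lines read from stdin.
    INVALID = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

    hits = Counter()
    for line in sys.stdin:
        m = INVALID.search(line)
        if m:
            hits[m.group(2)] += 1
    for ip, n in hits.most_common():
        print(ip, n)

Fed with journal output on stdin, it prints each offending address with its attempt count, which is usually enough to decide what to feed a firewall or fail2ban.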
Jan 22 14:16:49 compute-2 nova_compute[226433]: 2026-01-22 14:16:49.068 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:49 compute-2 ceph-mon[77081]: pgmap v1462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:49 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:49.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:49.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:50.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:50 compute-2 nova_compute[226433]: 2026-01-22 14:16:50.292 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:50 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:51.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:51 compute-2 ceph-mon[77081]: pgmap v1463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:51 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:16:51 compute-2 nova_compute[226433]: 2026-01-22 14:16:51.410 226437 DEBUG oslo_concurrency.processutils [None req-aeaaeb78-1155-4f77-81df-46e2a650d614 cfca93e323f848dba5ea3f5880bb9071 12769453a3af4b8eb7d8ff7daaaaa7ad - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:16:51 compute-2 nova_compute[226433]: 2026-01-22 14:16:51.446 226437 DEBUG oslo_concurrency.processutils [None req-aeaaeb78-1155-4f77-81df-46e2a650d614 cfca93e323f848dba5ea3f5880bb9071 12769453a3af4b8eb7d8ff7daaaaa7ad - - default default] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
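Here nova reads host load by running env LANG=C uptime through oslo.concurrency's processutils, which produces both the "Running cmd" and "returned: 0" lines above. The same call in isolation; execute() returns a (stdout, stderr) pair and raises ProcessExecutionError on a non-zero exit.

    from oslo_concurrency import processutils

    # Same command nova logs above; command parts are positional arguments.
    out, _err = processutils.execute("env", "LANG=C", "uptime")
    print(out.strip())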
Jan 22 14:16:51 compute-2 sudo[241062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:51 compute-2 sudo[241062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:51 compute-2 sudo[241062]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:51 compute-2 sudo[241087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:16:51 compute-2 sudo[241087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:51 compute-2 sudo[241087]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:51.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:52.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:52 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:52 compute-2 ceph-mon[77081]: pgmap v1464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:52 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:52 compute-2 sudo[241112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:52 compute-2 sudo[241112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:52 compute-2 sudo[241112]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:52 compute-2 sudo[241137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:16:52 compute-2 sudo[241137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:16:52 compute-2 sudo[241137]: pam_unix(sudo:session): session closed for user root
Jan 22 14:16:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:53.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:53 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:16:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:16:54 compute-2 nova_compute[226433]: 2026-01-22 14:16:54.070 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:54.243+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:54 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:54 compute-2 ceph-mon[77081]: pgmap v1465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:55.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:55 compute-2 nova_compute[226433]: 2026-01-22 14:16:55.294 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:55 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:16:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:56.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:56 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:56 compute-2 ceph-mon[77081]: pgmap v1466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:57.244+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:57 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:57 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:16:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:16:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:16:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:58.219+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:59 compute-2 nova_compute[226433]: 2026-01-22 14:16:59.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:16:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:59.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:16:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:59.201+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:16:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:59 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:59 compute-2 ceph-mon[77081]: pgmap v1467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:16:59 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:16:59 compute-2 nova_compute[226433]: 2026-01-22 14:16:59.572 226437 DEBUG oslo_concurrency.lockutils [None req-46113aab-392c-4b18-81d5-e2b8818c573a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:16:59 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 14:16:59 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 14:16:59 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 22 14:16:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:16:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:16:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:00.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:00 compute-2 nova_compute[226433]: 2026-01-22 14:17:00.296 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:00 compute-2 podman[241167]: 2026-01-22 14:17:00.984448461 +0000 UTC m=+0.049079205 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:17:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:17:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:01.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:17:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:01.244+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:01.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:02.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:02 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:02 compute-2 ceph-mon[77081]: pgmap v1468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 22 14:17:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:03.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:03.248+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:03.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:04 compute-2 nova_compute[226433]: 2026-01-22 14:17:04.073 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:04.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:04 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:04 compute-2 ceph-mon[77081]: pgmap v1469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 14:17:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:05.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:17:05 compute-2 nova_compute[226433]: 2026-01-22 14:17:05.296 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:05 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:05 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:05 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:05 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:05 compute-2 ceph-mon[77081]: pgmap v1470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 22 14:17:05 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:17:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:05.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:17:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:06.234+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:06 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:17:06 compute-2 ceph-mon[77081]: pgmap v1471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 14:17:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:07.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:08 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:08 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:08.215+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:09 compute-2 nova_compute[226433]: 2026-01-22 14:17:09.076 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:17:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:17:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:09.174+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:17:09.328 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:17:09 compute-2 nova_compute[226433]: 2026-01-22 14:17:09.328 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:17:09.329 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:17:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:17:09.330 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:17:09 compute-2 ceph-mon[77081]: pgmap v1472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 22 14:17:09 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:10.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:10 compute-2 nova_compute[226433]: 2026-01-22 14:17:10.298 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:10 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:10 compute-2 ceph-mon[77081]: pgmap v1473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Jan 22 14:17:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:11.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:11.237+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:11 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:11 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:17:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:17:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:12.197+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:12 compute-2 sudo[241191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:12 compute-2 sudo[241191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:12 compute-2 sudo[241191]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:12 compute-2 sudo[241216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:12 compute-2 sudo[241216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:12 compute-2 sudo[241216]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:12 compute-2 ceph-mon[77081]: pgmap v1474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Jan 22 14:17:12 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:12 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:13.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:13.171+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:13 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:13.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:14 compute-2 nova_compute[226433]: 2026-01-22 14:17:14.079 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:14.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:14 compute-2 ceph-mon[77081]: pgmap v1475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 345 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 88 op/s
Jan 22 14:17:14 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:15.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:15.213+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:15 compute-2 nova_compute[226433]: 2026-01-22 14:17:15.299 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:15.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:16 compute-2 podman[241243]: 2026-01-22 14:17:16.069261186 +0000 UTC m=+0.128293043 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:17:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:16.166+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:16 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:17.192+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:17 compute-2 ceph-mon[77081]: pgmap v1476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 114 op/s
Jan 22 14:17:17 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:17 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:17.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:18.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:18 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:18 compute-2 ceph-mon[77081]: pgmap v1477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:17:18 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2171679207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:17:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2171679207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:17:19 compute-2 nova_compute[226433]: 2026-01-22 14:17:19.081 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:19.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:19 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:19.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:20.153+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:20 compute-2 nova_compute[226433]: 2026-01-22 14:17:20.301 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:21 compute-2 ceph-mon[77081]: pgmap v1478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:17:21 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:21.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:21.183+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:21.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:22.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:22 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:23.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:23.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:23 compute-2 ceph-mon[77081]: pgmap v1479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 93 op/s
Jan 22 14:17:23 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:23 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:24 compute-2 nova_compute[226433]: 2026-01-22 14:17:24.083 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:24.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:24 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:24 compute-2 ceph-mon[77081]: pgmap v1480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Jan 22 14:17:24 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:25.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:25.219+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:25 compute-2 nova_compute[226433]: 2026-01-22 14:17:25.302 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:25.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:26.227+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:27 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:17:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:17:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:27.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:27.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:28 compute-2 ceph-mon[77081]: pgmap v1481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Jan 22 14:17:28 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:28 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:28.191+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:29 compute-2 ceph-mon[77081]: pgmap v1482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:29 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:29 compute-2 nova_compute[226433]: 2026-01-22 14:17:29.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:29.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:29.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:30.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:30 compute-2 nova_compute[226433]: 2026-01-22 14:17:30.304 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:30 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:31.176+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:31 compute-2 ceph-mon[77081]: pgmap v1483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:31 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:17:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:31.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:17:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:32 compute-2 podman[241278]: 2026-01-22 14:17:32.010666418 +0000 UTC m=+0.062710201 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 14:17:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:32.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:32 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:32 compute-2 ceph-mon[77081]: pgmap v1484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:32 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:32 compute-2 sudo[241299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:32 compute-2 sudo[241299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:32 compute-2 sudo[241299]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:32 compute-2 sudo[241324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:32 compute-2 sudo[241324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:32 compute-2 sudo[241324]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:33.224+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:33.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:34 compute-2 nova_compute[226433]: 2026-01-22 14:17:34.087 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:34 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:34.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.310665) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454310701, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1246, "num_deletes": 251, "total_data_size": 2384903, "memory_usage": 2420368, "flush_reason": "Manual Compaction"}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454439453, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1568060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41552, "largest_seqno": 42793, "table_properties": {"data_size": 1562724, "index_size": 2604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13558, "raw_average_key_size": 21, "raw_value_size": 1551283, "raw_average_value_size": 2405, "num_data_blocks": 111, "num_entries": 645, "num_filter_entries": 645, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091372, "oldest_key_time": 1769091372, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 128822 microseconds, and 4550 cpu microseconds.
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.439483) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1568060 bytes OK
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.439503) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.444651) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.444676) EVENT_LOG_v1 {"time_micros": 1769091454444668, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.444697) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2378738, prev total WAL file size 2378738, number of live WAL files 2.
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.446243) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1531KB)], [78(8310KB)]
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454446275, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 10077528, "oldest_snapshot_seqno": -1}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 8314 keys, 8450274 bytes, temperature: kUnknown
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454539383, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 8450274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8403151, "index_size": 25251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 222074, "raw_average_key_size": 26, "raw_value_size": 8259695, "raw_average_value_size": 993, "num_data_blocks": 963, "num_entries": 8314, "num_filter_entries": 8314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.539606) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8450274 bytes
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.564833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.2 rd, 90.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 8831, records dropped: 517 output_compression: NoCompression
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.564882) EVENT_LOG_v1 {"time_micros": 1769091454564864, "job": 48, "event": "compaction_finished", "compaction_time_micros": 93175, "compaction_time_cpu_micros": 20875, "output_level": 6, "num_output_files": 1, "total_output_size": 8450274, "num_input_records": 8831, "num_output_records": 8314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454565633, "job": 48, "event": "table_file_deletion", "file_number": 80}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454568210, "job": 48, "event": "table_file_deletion", "file_number": 78}
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.446163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:17:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:35.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:35 compute-2 nova_compute[226433]: 2026-01-22 14:17:35.307 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:35 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:35 compute-2 ceph-mon[77081]: pgmap v1485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:35 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:35.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:36.148+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:36 compute-2 nova_compute[226433]: 2026-01-22 14:17:36.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:36 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:36 compute-2 ceph-mon[77081]: pgmap v1486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:17:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:17:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:37.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:37 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:37 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:37.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:38.232+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:39 compute-2 nova_compute[226433]: 2026-01-22 14:17:39.125 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:39.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:39 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:39 compute-2 ceph-mon[77081]: pgmap v1487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:39 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:39.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:40.182+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:40 compute-2 nova_compute[226433]: 2026-01-22 14:17:40.344 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:40 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:40 compute-2 ceph-mon[77081]: pgmap v1488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:41.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:41 compute-2 nova_compute[226433]: 2026-01-22 14:17:41.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:41.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:42.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:42 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:42 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:42 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:42 compute-2 nova_compute[226433]: 2026-01-22 14:17:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:43.136+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:43 compute-2 ceph-mon[77081]: pgmap v1489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:43 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:43.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:44 compute-2 nova_compute[226433]: 2026-01-22 14:17:44.127 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:44.154+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:44 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:44 compute-2 ceph-mon[77081]: pgmap v1490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:45.112+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:45 compute-2 nova_compute[226433]: 2026-01-22 14:17:45.346 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:45 compute-2 nova_compute[226433]: 2026-01-22 14:17:45.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:45 compute-2 nova_compute[226433]: 2026-01-22 14:17:45.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:45 compute-2 nova_compute[226433]: 2026-01-22 14:17:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:17:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:45.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:45 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:46.136+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.589 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.590 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.590 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.591 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.592 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.592 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.655 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.655 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.655 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.656 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:17:46 compute-2 nova_compute[226433]: 2026-01-22 14:17:46.656 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:17:47 compute-2 podman[241367]: 2026-01-22 14:17:47.011484854 +0000 UTC m=+0.076824517 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 14:17:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:47.177+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:17:47.189 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:17:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:17:47.189 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:17:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:17:47.190 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:17:47 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:47 compute-2 ceph-mon[77081]: pgmap v1491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:47 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3921556699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:47 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:17:47 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/490131714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:47 compute-2 nova_compute[226433]: 2026-01-22 14:17:47.322 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:17:47 compute-2 nova_compute[226433]: 2026-01-22 14:17:47.485 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:17:47 compute-2 nova_compute[226433]: 2026-01-22 14:17:47.487 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4796MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:17:47 compute-2 nova_compute[226433]: 2026-01-22 14:17:47.487 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:17:47 compute-2 nova_compute[226433]: 2026-01-22 14:17:47.487 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:17:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:47.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:48 compute-2 nova_compute[226433]: 2026-01-22 14:17:48.006 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:17:48 compute-2 nova_compute[226433]: 2026-01-22 14:17:48.006 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:17:48 compute-2 nova_compute[226433]: 2026-01-22 14:17:48.007 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:17:48 compute-2 nova_compute[226433]: 2026-01-22 14:17:48.007 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:17:48 compute-2 nova_compute[226433]: 2026-01-22 14:17:48.007 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:17:48 compute-2 nova_compute[226433]: 2026-01-22 14:17:48.093 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:17:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:48.179+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:48 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/490131714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3359774298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:17:48 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1398042159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:49 compute-2 nova_compute[226433]: 2026-01-22 14:17:49.013 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.921s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:17:49 compute-2 nova_compute[226433]: 2026-01-22 14:17:49.023 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:17:49 compute-2 nova_compute[226433]: 2026-01-22 14:17:49.043 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:17:49 compute-2 nova_compute[226433]: 2026-01-22 14:17:49.066 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:17:49 compute-2 nova_compute[226433]: 2026-01-22 14:17:49.067 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:17:49 compute-2 nova_compute[226433]: 2026-01-22 14:17:49.128 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:49.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:17:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:17:49 compute-2 ceph-mon[77081]: pgmap v1492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:49 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1398042159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:17:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:49.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:50.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:50 compute-2 nova_compute[226433]: 2026-01-22 14:17:50.348 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:51 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:51 compute-2 ceph-mon[77081]: pgmap v1493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:51.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:51.256+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:51 compute-2 sudo[241429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:51 compute-2 sudo[241429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-2 sudo[241429]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:51 compute-2 sudo[241454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:17:51 compute-2 sudo[241454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-2 sudo[241454]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:51 compute-2 sudo[241479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:51 compute-2 sudo[241479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-2 sudo[241479]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:51 compute-2 sudo[241504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:17:51 compute-2 sudo[241504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:51.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:51 compute-2 nova_compute[226433]: 2026-01-22 14:17:51.992 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:17:52 compute-2 sudo[241504]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:52.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:52 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:52 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:52 compute-2 sudo[241551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:52 compute-2 sudo[241551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:52 compute-2 sudo[241551]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:52 compute-2 sudo[241576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:17:52 compute-2 sudo[241576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:52 compute-2 sudo[241576]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:52 compute-2 sudo[241584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:52 compute-2 sudo[241584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:52 compute-2 sudo[241584]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:52 compute-2 sudo[241624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:52 compute-2 sudo[241624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:52 compute-2 sudo[241624]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:52 compute-2 sudo[241632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:52 compute-2 sudo[241632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:52 compute-2 sudo[241632]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:52 compute-2 sudo[241674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:17:52 compute-2 sudo[241674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:53.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:53.203+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:53 compute-2 ceph-mon[77081]: pgmap v1494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:53 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:17:53 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:17:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:17:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:17:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:17:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:54 compute-2 nova_compute[226433]: 2026-01-22 14:17:54.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:54 compute-2 podman[241774]: 2026-01-22 14:17:54.183233478 +0000 UTC m=+0.732156303 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 14:17:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:54.190+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:54 compute-2 podman[241774]: 2026-01-22 14:17:54.612351189 +0000 UTC m=+1.161274014 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 14:17:55 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:55 compute-2 ceph-mon[77081]: pgmap v1495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:55 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:55.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:55 compute-2 nova_compute[226433]: 2026-01-22 14:17:55.349 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:55.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:56.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:56 compute-2 podman[241928]: 2026-01-22 14:17:56.233453069 +0000 UTC m=+0.651884145 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:17:56 compute-2 sshd-session[241941]: Connection closed by authenticating user root 92.118.39.95 port 40474 [preauth]
Jan 22 14:17:56 compute-2 podman[241950]: 2026-01-22 14:17:56.536529582 +0000 UTC m=+0.284111319 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:17:56 compute-2 podman[241928]: 2026-01-22 14:17:56.565203095 +0000 UTC m=+0.983634161 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:17:56 compute-2 podman[241996]: 2026-01-22 14:17:56.821544004 +0000 UTC m=+0.078704108 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793)
Jan 22 14:17:56 compute-2 podman[242016]: 2026-01-22 14:17:56.889527655 +0000 UTC m=+0.051906914 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 14:17:56 compute-2 podman[241996]: 2026-01-22 14:17:56.914172981 +0000 UTC m=+0.171333055 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived)
Jan 22 14:17:57 compute-2 sudo[241674]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:57 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:57.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:17:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:57.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:58.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:58 compute-2 ceph-mon[77081]: pgmap v1496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:17:58 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:58 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:59 compute-2 nova_compute[226433]: 2026-01-22 14:17:59.132 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:17:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:59.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:59.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:17:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:17:59 compute-2 sudo[242030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:59 compute-2 sudo[242030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:59 compute-2 sudo[242030]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:59 compute-2 sudo[242055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:17:59 compute-2 sudo[242055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:59 compute-2 sudo[242055]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:59 compute-2 sudo[242080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:17:59 compute-2 sudo[242080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:59 compute-2 sudo[242080]: pam_unix(sudo:session): session closed for user root
Jan 22 14:17:59 compute-2 sudo[242105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:17:59 compute-2 sudo[242105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:17:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:17:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:17:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:17:59 compute-2 sudo[242105]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:00 compute-2 ceph-mon[77081]: pgmap v1497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:00 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:00 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:00.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:00 compute-2 nova_compute[226433]: 2026-01-22 14:18:00.352 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:01.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:01.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:01 compute-2 ceph-mon[77081]: pgmap v1498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:18:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:18:01 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:18:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:18:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:18:01 compute-2 nova_compute[226433]: 2026-01-22 14:18:01.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:01.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:02.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:18:02 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:18:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3882272731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:02 compute-2 ceph-mon[77081]: pgmap v1499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 253 MiB data, 353 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:02 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:03 compute-2 podman[242163]: 2026-01-22 14:18:03.002371192 +0000 UTC m=+0.058832558 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 22 14:18:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:03.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:03.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:04 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:18:04 compute-2 nova_compute[226433]: 2026-01-22 14:18:04.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:04.309+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:05.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:05.300+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:05 compute-2 nova_compute[226433]: 2026-01-22 14:18:05.354 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:05 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:05 compute-2 ceph-mon[77081]: pgmap v1500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 272 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 579 KiB/s wr, 11 op/s
Jan 22 14:18:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:05.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:06.336+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:07.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:07.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:07 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:07 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:07 compute-2 ceph-mon[77081]: pgmap v1501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:07.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:08.330+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:08 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:08 compute-2 ceph-mon[77081]: pgmap v1502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:09 compute-2 nova_compute[226433]: 2026-01-22 14:18:09.135 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:09.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:09 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:09.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:10.263+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:10 compute-2 nova_compute[226433]: 2026-01-22 14:18:10.356 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:10 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:10 compute-2 ceph-mon[77081]: pgmap v1503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:10 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:11.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:11.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:11 compute-2 sudo[242185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:11 compute-2 sudo[242185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:11 compute-2 sudo[242185]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.863243) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491863267, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 728, "num_deletes": 255, "total_data_size": 1208474, "memory_usage": 1230024, "flush_reason": "Manual Compaction"}
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491869603, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 784551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42799, "largest_seqno": 43521, "table_properties": {"data_size": 780920, "index_size": 1411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9036, "raw_average_key_size": 19, "raw_value_size": 773296, "raw_average_value_size": 1703, "num_data_blocks": 61, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091454, "oldest_key_time": 1769091454, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 6448 microseconds, and 2653 cpu microseconds.
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.869675) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 784551 bytes OK
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.869708) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871767) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871783) EVENT_LOG_v1 {"time_micros": 1769091491871778, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871801) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1204419, prev total WAL file size 1204419, number of live WAL files 2.
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(766KB)], [81(8252KB)]
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491872539, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 9234825, "oldest_snapshot_seqno": -1}
Jan 22 14:18:11 compute-2 sudo[242210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:18:11 compute-2 sudo[242210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:11 compute-2 sudo[242210]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 8243 keys, 9067574 bytes, temperature: kUnknown
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491940135, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 9067574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9020058, "index_size": 25836, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 221857, "raw_average_key_size": 26, "raw_value_size": 8876929, "raw_average_value_size": 1076, "num_data_blocks": 985, "num_entries": 8243, "num_filter_entries": 8243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.940504) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9067574 bytes
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.942702) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.5 rd, 134.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.3) write-amplify(11.6) OK, records in: 8768, records dropped: 525 output_compression: NoCompression
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.942717) EVENT_LOG_v1 {"time_micros": 1769091491942710, "job": 50, "event": "compaction_finished", "compaction_time_micros": 67674, "compaction_time_cpu_micros": 28682, "output_level": 6, "num_output_files": 1, "total_output_size": 9067574, "num_input_records": 8768, "num_output_records": 8243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491942964, "job": 50, "event": "table_file_deletion", "file_number": 83}
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491944415, "job": 50, "event": "table_file_deletion", "file_number": 81}
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:18:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:11.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:12.229+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:18:12 compute-2 ceph-mon[77081]: pgmap v1504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:12 compute-2 sudo[242236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:12 compute-2 sudo[242236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:12 compute-2 sudo[242236]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:13 compute-2 sudo[242261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:13 compute-2 sudo[242261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:13 compute-2 sudo[242261]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:13.199+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:13.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:13 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:13 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:13.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:14 compute-2 nova_compute[226433]: 2026-01-22 14:18:14.137 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:14.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:14 compute-2 ceph-mon[77081]: pgmap v1505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 15 op/s
Jan 22 14:18:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:15.226+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:15 compute-2 nova_compute[226433]: 2026-01-22 14:18:15.359 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:15 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:15.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:16.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:16 compute-2 ceph-mon[77081]: pgmap v1506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 1.2 MiB/s wr, 4 op/s
Jan 22 14:18:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:17.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:17.254+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:17 compute-2 ovn_controller[133156]: 2026-01-22T14:18:17Z|00044|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 22 14:18:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:17.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:18 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:18 compute-2 podman[242288]: 2026-01-22 14:18:18.017221535 +0000 UTC m=+0.074138941 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:18:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:18:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:18:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:18:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:18:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:18.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:19 compute-2 ceph-mon[77081]: pgmap v1507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:18:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:18:19 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:19 compute-2 nova_compute[226433]: 2026-01-22 14:18:19.139 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:19.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:19.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:19.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:20.247+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:20 compute-2 nova_compute[226433]: 2026-01-22 14:18:20.362 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:20 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:21.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:21.264+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:21 compute-2 ceph-mon[77081]: pgmap v1508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:18:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:18:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:22.239+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:22 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:22 compute-2 ceph-mon[77081]: pgmap v1509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:22 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:23.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:23.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:23.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:24 compute-2 nova_compute[226433]: 2026-01-22 14:18:24.181 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:24.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:24 compute-2 ceph-mon[77081]: pgmap v1510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:24 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:25.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:25.323+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:25 compute-2 nova_compute[226433]: 2026-01-22 14:18:25.366 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:26 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:26.343+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:27 compute-2 ceph-mon[77081]: pgmap v1511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:27 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:27.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:27.296+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:28.330+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:29 compute-2 nova_compute[226433]: 2026-01-22 14:18:29.184 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:29.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:29.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:30 compute-2 ceph-mon[77081]: pgmap v1512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:30 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:30.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:30 compute-2 nova_compute[226433]: 2026-01-22 14:18:30.368 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:31 compute-2 ceph-mon[77081]: pgmap v1513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:31.274+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:32.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:32 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:32 compute-2 ceph-mon[77081]: pgmap v1514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:33 compute-2 sudo[242322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:33 compute-2 sudo[242322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:33 compute-2 sudo[242322]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:33 compute-2 podman[242346]: 2026-01-22 14:18:33.199976698 +0000 UTC m=+0.049070218 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 14:18:33 compute-2 sudo[242353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:33 compute-2 sudo[242353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:33 compute-2 sudo[242353]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:33.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:33.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:18:33 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:34 compute-2 nova_compute[226433]: 2026-01-22 14:18:34.187 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:34.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:34 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:34 compute-2 ceph-mon[77081]: pgmap v1515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:34 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:35.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:35 compute-2 nova_compute[226433]: 2026-01-22 14:18:35.372 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:35.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:35 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:36.254+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:36 compute-2 ceph-mon[77081]: pgmap v1516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:36 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:37.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:37 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:38.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:38 compute-2 nova_compute[226433]: 2026-01-22 14:18:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:39 compute-2 ceph-mon[77081]: pgmap v1517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:39 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:39 compute-2 nova_compute[226433]: 2026-01-22 14:18:39.189 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:39.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:40 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:40.226+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:40 compute-2 nova_compute[226433]: 2026-01-22 14:18:40.374 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:41 compute-2 ceph-mon[77081]: pgmap v1518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:41 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:41.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:41 compute-2 nova_compute[226433]: 2026-01-22 14:18:41.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:41.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:42.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:42 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:42 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:42.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:43 compute-2 ceph-mon[77081]: pgmap v1519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:43 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:43.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:43 compute-2 nova_compute[226433]: 2026-01-22 14:18:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:44 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:44.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:44 compute-2 nova_compute[226433]: 2026-01-22 14:18:44.191 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:45.152+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:45 compute-2 ceph-mon[77081]: pgmap v1520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:45 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:45 compute-2 nova_compute[226433]: 2026-01-22 14:18:45.376 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:45 compute-2 nova_compute[226433]: 2026-01-22 14:18:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:45 compute-2 nova_compute[226433]: 2026-01-22 14:18:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:18:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:46.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:46.122+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:46 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:46 compute-2 ceph-mon[77081]: pgmap v1521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:46 compute-2 nova_compute[226433]: 2026-01-22 14:18:46.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:47.091+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:18:47.190 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:18:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:18:47.191 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:18:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:18:47.191 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:18:47 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:47 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:47 compute-2 nova_compute[226433]: 2026-01-22 14:18:47.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:48.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:48.050+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:48 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:48 compute-2 ceph-mon[77081]: pgmap v1522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.537 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.563 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.564 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.564 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.564 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:18:48 compute-2 nova_compute[226433]: 2026-01-22 14:18:48.565 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:18:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:18:48 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2271535365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.008 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:18:49 compute-2 podman[242419]: 2026-01-22 14:18:49.054136814 +0000 UTC m=+0.109588074 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 22 14:18:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:49.064+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.221 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.226 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.227 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4786MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.227 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.228 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.301 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.301 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.302 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.302 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.302 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:18:49 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1217922939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2271535365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.362 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:18:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:18:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3014372553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.796 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.802 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.824 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.826 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:18:49 compute-2 nova_compute[226433]: 2026-01-22 14:18:49.827 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:18:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:50.107+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:50 compute-2 nova_compute[226433]: 2026-01-22 14:18:50.378 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:50 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/545743632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:50 compute-2 ceph-mon[77081]: pgmap v1523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3014372553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:18:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:51.142+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:51 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:18:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:18:51 compute-2 nova_compute[226433]: 2026-01-22 14:18:51.806 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:18:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:52.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:52.157+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:52 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:52 compute-2 ceph-mon[77081]: pgmap v1524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:53.115+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:53 compute-2 sudo[242469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:53 compute-2 sudo[242469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:53 compute-2 sudo[242469]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:53 compute-2 sudo[242494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:18:53 compute-2 sudo[242494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:18:53 compute-2 sudo[242494]: pam_unix(sudo:session): session closed for user root
Jan 22 14:18:53 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:53 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:18:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:54.088+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:54 compute-2 nova_compute[226433]: 2026-01-22 14:18:54.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:54 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:54 compute-2 ceph-mon[77081]: pgmap v1525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:55.073+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:55 compute-2 nova_compute[226433]: 2026-01-22 14:18:55.380 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:55 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:18:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:56.109+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:56 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:56 compute-2 ceph-mon[77081]: pgmap v1526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:57.064+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:57 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:18:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:18:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:58.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:18:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:58.104+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:58 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:58 compute-2 ceph-mon[77081]: pgmap v1527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:18:58 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:59.099+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:18:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:59 compute-2 nova_compute[226433]: 2026-01-22 14:18:59.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:18:59 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:18:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:18:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:18:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:00.141+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:00 compute-2 nova_compute[226433]: 2026-01-22 14:19:00.383 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:00 compute-2 ceph-mon[77081]: pgmap v1528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:00 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:01.173+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:01 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:19:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:19:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:02.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:02 compute-2 ceph-mon[77081]: pgmap v1529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:02 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:02 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:02 compute-2 sshd-session[242523]: Invalid user banx from 45.148.10.240 port 35722
Jan 22 14:19:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:03 compute-2 sshd-session[242523]: Connection closed by invalid user banx 45.148.10.240 port 35722 [preauth]
Jan 22 14:19:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:03.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:03 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:03 compute-2 podman[242526]: 2026-01-22 14:19:03.988761566 +0000 UTC m=+0.052346527 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:19:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:04.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:04.212+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:04 compute-2 nova_compute[226433]: 2026-01-22 14:19:04.230 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:04 compute-2 ceph-mon[77081]: pgmap v1530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:04 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:05.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:05 compute-2 nova_compute[226433]: 2026-01-22 14:19:05.386 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 14:19:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 14:19:05 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:06.141+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:06 compute-2 ceph-mon[77081]: pgmap v1531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:06 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:07.155+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:07 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:07 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:08.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:08.139+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:08 compute-2 ceph-mon[77081]: pgmap v1532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:08 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:09.147+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:09 compute-2 nova_compute[226433]: 2026-01-22 14:19:09.232 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:19:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:09.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:19:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:10.186+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:10 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:10 compute-2 nova_compute[226433]: 2026-01-22 14:19:10.388 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:11 compute-2 ceph-mon[77081]: pgmap v1533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:11 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:11.197+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:11 compute-2 sudo[242550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:11 compute-2 sudo[242550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:11 compute-2 sudo[242550]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:12 compute-2 sudo[242575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:19:12 compute-2 sudo[242575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:12 compute-2 sudo[242575]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:12.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:12 compute-2 sudo[242600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:12 compute-2 sudo[242600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:12 compute-2 sudo[242600]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:12 compute-2 sudo[242625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:19:12 compute-2 sudo[242625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:12.199+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:12 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:12 compute-2 sudo[242625]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:13.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:13 compute-2 ceph-mon[77081]: pgmap v1534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:13 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:13 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:19:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:19:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:19:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:19:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:19:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:19:13 compute-2 sudo[242682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:13 compute-2 sudo[242682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:13 compute-2 sudo[242682]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:13 compute-2 sudo[242707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:13 compute-2 sudo[242707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:13 compute-2 sudo[242707]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:13.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:14.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:14 compute-2 nova_compute[226433]: 2026-01-22 14:19:14.260 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:14 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:14 compute-2 ceph-mon[77081]: pgmap v1535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:15.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:15 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:15 compute-2 nova_compute[226433]: 2026-01-22 14:19:15.390 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:15.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:16.134+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:16 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:16 compute-2 ceph-mon[77081]: pgmap v1536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:17.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:17 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:19:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:19:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:19:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:19:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:18.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:18 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:18 compute-2 ceph-mon[77081]: pgmap v1537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:19:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:19:19 compute-2 nova_compute[226433]: 2026-01-22 14:19:19.261 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:19.278+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:19 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:19:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:19:19 compute-2 sudo[242735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:19 compute-2 sudo[242735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:19 compute-2 sudo[242735]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:19 compute-2 sudo[242766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:19:19 compute-2 sudo[242766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:19 compute-2 sudo[242766]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:19 compute-2 podman[242759]: 2026-01-22 14:19:19.652253922 +0000 UTC m=+0.089792552 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 14:19:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:19.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:20.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:20 compute-2 nova_compute[226433]: 2026-01-22 14:19:20.391 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:20 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:20 compute-2 ceph-mon[77081]: pgmap v1538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:21.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:22 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:22.215+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:23 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:23 compute-2 ceph-mon[77081]: pgmap v1539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:23 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:23 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:23.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:24 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:24.192+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:24 compute-2 nova_compute[226433]: 2026-01-22 14:19:24.262 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:25 compute-2 ceph-mon[77081]: pgmap v1540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:25 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:25.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:25 compute-2 nova_compute[226433]: 2026-01-22 14:19:25.394 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:26 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:26.223+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:27 compute-2 ceph-mon[77081]: pgmap v1541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:27 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:27 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2557 sec, osd.2 has slow ops (SLOW_OPS)
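[annotation] The monitor republishes the SLOW_OPS health check whenever the oldest blocked time changes, which is why this line recurs with a growing second count. A sketch for polling the same condition programmatically; `ceph health detail -f json` is a standard invocation, but the JSON shape assumed here ("checks" -> "SLOW_OPS" -> "summary" -> "message") may differ between releases:

import json, subprocess

health = json.loads(subprocess.check_output(
    ['ceph', 'health', 'detail', '-f', 'json']))
slow = health.get('checks', {}).get('SLOW_OPS')
if slow:
    # e.g. "22 slow ops, oldest one blocked for 2557 sec, osd.2 has slow ops"
    print(slow['summary']['message'])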
Jan 22 14:19:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:27.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:28 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:28.192+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:29 compute-2 ceph-mon[77081]: pgmap v1542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:29 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:29.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:29 compute-2 nova_compute[226433]: 2026-01-22 14:19:29.265 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:29.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:30.166+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:30 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:30 compute-2 nova_compute[226433]: 2026-01-22 14:19:30.397 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:31.126+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:31 compute-2 ceph-mon[77081]: pgmap v1543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:31 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:32.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:32.111+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:33 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:33.159+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:33 compute-2 sudo[242819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:33 compute-2 sudo[242819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:33 compute-2 sudo[242819]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:33 compute-2 sudo[242844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:33 compute-2 sudo[242844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:33 compute-2 sudo[242844]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:34 compute-2 ceph-mon[77081]: pgmap v1544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:34 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:34 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:34 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:34.138+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:34 compute-2 nova_compute[226433]: 2026-01-22 14:19:34.265 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:35 compute-2 podman[242870]: 2026-01-22 14:19:35.039820296 +0000 UTC m=+0.087914781 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 14:19:35 compute-2 ceph-mon[77081]: pgmap v1545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:35 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:35.168+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:35 compute-2 nova_compute[226433]: 2026-01-22 14:19:35.400 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:36 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:36.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:37 compute-2 ceph-mon[77081]: pgmap v1546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:37 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:37.189+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:38.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:38 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:38.171+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:38 compute-2 nova_compute[226433]: 2026-01-22 14:19:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:39.162+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:39 compute-2 nova_compute[226433]: 2026-01-22 14:19:39.270 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:39 compute-2 ceph-mon[77081]: pgmap v1547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:39 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:19:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:19:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:40.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:40.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:40 compute-2 nova_compute[226433]: 2026-01-22 14:19:40.402 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:40 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:40 compute-2 ceph-mon[77081]: pgmap v1548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:41.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:41 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:42.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:42 compute-2 nova_compute[226433]: 2026-01-22 14:19:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:42 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:42 compute-2 ceph-mon[77081]: pgmap v1549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:42 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:43.177+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:43 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:19:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:19:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:44.209+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:44 compute-2 nova_compute[226433]: 2026-01-22 14:19:44.270 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:44 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:44 compute-2 ceph-mon[77081]: pgmap v1550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:45.183+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:45 compute-2 nova_compute[226433]: 2026-01-22 14:19:45.445 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:45 compute-2 nova_compute[226433]: 2026-01-22 14:19:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:45 compute-2 nova_compute[226433]: 2026-01-22 14:19:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:45 compute-2 nova_compute[226433]: 2026-01-22 14:19:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:19:45 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:46.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:46.141+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:46 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:46 compute-2 ceph-mon[77081]: pgmap v1551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:47.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:19:47.192 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:19:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:19:47.192 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:19:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:19:47.193 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
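[annotation] The Acquiring/acquired/released triplet above is emitted by oslo_concurrency's lock wrapper (the `inner` function at lockutils.py:404/409/423 in the logged paths), and the 0.000s held time shows the child-process check itself is cheap. An illustrative sketch of the pattern that produces such lines, reusing the lock name from the log; the function body here is a stand-in:

from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    # Runs with the named lock held; the wrapper logs acquire-wait and
    # held durations at DEBUG, exactly as in the three lines above.
    pass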
Jan 22 14:19:47 compute-2 nova_compute[226433]: 2026-01-22 14:19:47.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:47 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:47 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2577 sec, osd.2 has slow ops (SLOW_OPS)
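[annotation] The blocked-for counter tracks wall-clock time: 2557 s at 14:19:27, 2562 s at 14:19:34, 2567 s at 14:19:42, and 2577 s at 14:19:47 all point back to an onset around 13:36:50-13:36:55 UTC, i.e. the oldest op has been stuck for about 43 minutes by this point in the capture. Worked back from the latest update:

from datetime import datetime, timedelta

update = datetime(2026, 1, 22, 14, 19, 47)
# subtract the reported blocked-for duration to recover the onset
print(update - timedelta(seconds=2577))  # 2026-01-22 13:36:50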
Jan 22 14:19:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:48.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:48.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:48 compute-2 nova_compute[226433]: 2026-01-22 14:19:48.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:48 compute-2 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:19:48 compute-2 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:19:48 compute-2 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:19:48 compute-2 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:19:48 compute-2 nova_compute[226433]: 2026-01-22 14:19:48.542 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:19:48 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:48 compute-2 ceph-mon[77081]: pgmap v1552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:19:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/578781724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.022 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
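[annotation] nova-compute's resource audit shells out to the exact `ceph df` command logged above to size the RBD-backed disk pool (the mon's handle_command/audit lines are the server side of the same call). A trimmed sketch of that probe; the "stats" field names are assumed from typical `ceph df -f json` output and may vary by release:

import json, subprocess

cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
       '--conf', '/etc/ceph/ceph.conf']
stats = json.loads(subprocess.check_output(cmd))['stats']
# cluster-wide raw capacity and remaining free space, in bytes
print(stats['total_bytes'], stats['total_avail_bytes'])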
Jan 22 14:19:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:49.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.231 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.232 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4774MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.232 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.233 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.272 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.319 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.320 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.320 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.320 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.321 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.387 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:19:49 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/578781724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:49 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:19:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1661653774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.817 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.823 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:19:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:49.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.849 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
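[annotation] From the inventory reported above, usable capacity per resource class works out as (total - reserved) * allocation_ratio, the formula placement documents for its capacity checks: 32 schedulable VCPUs, 7167 MB of RAM, and roughly 17.1 GB of disk. A quick check with the numbers taken from the log line:

inv = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, v in inv.items():
    print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
# -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~17.1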
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.851 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:19:49 compute-2 nova_compute[226433]: 2026-01-22 14:19:49.852 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:19:50 compute-2 podman[242940]: 2026-01-22 14:19:50.017633468 +0000 UTC m=+0.075666533 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 14:19:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:50.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:50.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
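
Note: osd.2 has been re-reporting the same 22 blocked ops roughly once per second (oldest: an omap read of rbd_mirror_snapshot_schedule); the adjacent ceph-osd line appears to be the same message captured through a second logging path. To see exactly which ops are stuck, dump them from the OSD's admin socket; a sketch, which must run where osd.2's admin socket is reachable (for example inside its cephadm container):

    import json, subprocess

    ops = json.loads(subprocess.run(
        ['ceph', 'daemon', 'osd.2', 'dump_ops_in_flight'],
        capture_output=True, text=True, check=True).stdout)
    for op in ops.get('ops', []):
        print(op.get('age'), op.get('description'))
    # 'dump_historic_ops' lists recently completed slow ops the same way.
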
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.446 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:50 compute-2 ceph-mon[77081]: pgmap v1553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1661653774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
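
Note: the recurring 'df' dispatches from entity='client.openstack' are the OpenStack services polling cluster capacity (the RBD backends of Cinder, Glance, and Nova all issue this). The equivalent query by hand, as a sketch:

    import json, subprocess

    df = json.loads(subprocess.run(
        ['ceph', 'df', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout)
    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print(f'{avail / total:.1%} of {total / 2**30:.1f} GiB available')
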
Jan 22 14:19:50 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.852 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.853 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.853 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:19:50 compute-2 nova_compute[226433]: 2026-01-22 14:19:50.911 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
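
Note: the 14:19:50.852-.911 lines are one pass of a Nova periodic task: _heal_instance_info_cache rebuilds its candidate list, skips the three instances still in the Building state, and reports nothing to heal before the next task (_poll_volume_usage) starts. The shape of such a task in oslo.service, as an illustrative sketch (names taken from the log, body a stand-in):

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _heal_instance_info_cache(self, context):
            # Nova's real task refreshes one instance's network info cache
            # per run and skips instances that are still building.
            pass
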
Jan 22 14:19:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:51.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1314543867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:51 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2930567293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:19:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:52.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:52.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:52 compute-2 nova_compute[226433]: 2026-01-22 14:19:52.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:19:52 compute-2 ceph-mon[77081]: pgmap v1554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:52 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:52 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
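
Note: the "blocked for N sec" counter in the SLOW_OPS health check tracks wall time (2582 s at 14:19:52, 2587 s at 14:19:58, 2592 s at 14:20:02), so the oldest op has been stuck since roughly 13:36:50, well before this excerpt begins. The back-projection:

    from datetime import datetime, timedelta

    update = datetime(2026, 1, 22, 14, 19, 52)
    print(update - timedelta(seconds=2582))   # 2026-01-22 13:36:50
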
Jan 22 14:19:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:19:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:53.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:53 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:53 compute-2 sudo[242970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:53 compute-2 sudo[242970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:53 compute-2 sudo[242970]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:53.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:53 compute-2 sudo[242995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:19:53 compute-2 sudo[242995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:19:53 compute-2 sudo[242995]: pam_unix(sudo:session): session closed for user root
Jan 22 14:19:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:54.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:54 compute-2 nova_compute[226433]: 2026-01-22 14:19:54.275 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:54 compute-2 ceph-mon[77081]: pgmap v1555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:54 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:55.229+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:55 compute-2 nova_compute[226433]: 2026-01-22 14:19:55.506 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:55 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:19:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:55.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:19:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:56.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:56 compute-2 ceph-mon[77081]: pgmap v1556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:56 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:57.230+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
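
Note: the recurring _set_new_cache_sizes lines are the monitor's cache autotuner splitting its memory budget (about 0.95 GiB here) between incremental osdmaps, full osdmaps, and the RocksDB KV cache; the budget is assumed to derive from mon_memory_target. Reading the active value back, as a sketch:

    import subprocess

    print(subprocess.run(
        ['ceph', 'config', 'get', 'mon', 'mon_memory_target'],
        capture_output=True, text=True, check=True).stdout.strip())
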
Jan 22 14:19:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:58.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:19:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:58.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:58 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:19:58 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:59.164+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:19:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:59 compute-2 nova_compute[226433]: 2026-01-22 14:19:59.277 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:19:59 compute-2 ceph-mon[77081]: pgmap v1557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:19:59 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:19:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:19:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:19:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:00.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:00.183+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:00 compute-2 nova_compute[226433]: 2026-01-22 14:20:00.510 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:00 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:00 compute-2 ceph-mon[77081]: pgmap v1558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 14:20:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
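
Note: this is the periodic health summary: the cluster is HEALTH_WARN solely because of the SLOW_OPS check on osd.2. The same detail is available as JSON for scripting; a sketch:

    import json, subprocess

    health = json.loads(subprocess.run(
        ['ceph', 'health', 'detail', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout)
    print(health['status'])
    slow = health['checks'].get('SLOW_OPS', {})
    print(slow.get('summary', {}).get('message'))
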
Jan 22 14:20:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:01.163+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:01 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:01 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:02.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:02.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:02 compute-2 ceph-mon[77081]: pgmap v1559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:02 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:02 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:03.231+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:03 compute-2 nova_compute[226433]: 2026-01-22 14:20:03.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:04 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:04.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:04.239+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:04 compute-2 nova_compute[226433]: 2026-01-22 14:20:04.279 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:05 compute-2 ceph-mon[77081]: pgmap v1560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:05 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:05.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:05 compute-2 nova_compute[226433]: 2026-01-22 14:20:05.513 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:05 compute-2 podman[243026]: 2026-01-22 14:20:05.976243145 +0000 UTC m=+0.043040867 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:20:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:06.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:06 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:07.242+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:07 compute-2 ceph-mon[77081]: pgmap v1561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:07 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:07 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:08.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:08.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:09.243+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:09 compute-2 nova_compute[226433]: 2026-01-22 14:20:09.281 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:09 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:09 compute-2 ceph-mon[77081]: pgmap v1562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:09 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:10.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:10 compute-2 nova_compute[226433]: 2026-01-22 14:20:10.515 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:10 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:10 compute-2 ceph-mon[77081]: pgmap v1563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:11.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:11.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:12.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:12 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:12 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:12 compute-2 sshd-session[243049]: Connection closed by authenticating user root 92.118.39.95 port 47696 [preauth]
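
Note: this sshd-session entry is an unauthenticated disconnect ([preauth]) by "root" from an external address, i.e. a routine SSH brute-force probe rather than part of the deployment traffic. A quick way to tally such probes from the journal (unit name assumed to be sshd):

    import re, subprocess

    log = subprocess.run(
        ['journalctl', '-u', 'sshd', '--no-pager', '-o', 'cat'],
        capture_output=True, text=True).stdout
    for user, ip in re.findall(
            r'Connection closed by authenticating user (\S+) (\S+)', log):
        print(user, ip)
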
Jan 22 14:20:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:13.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:13 compute-2 ceph-mon[77081]: pgmap v1564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:13 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:13 compute-2 sudo[243052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:13 compute-2 sudo[243052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:13 compute-2 sudo[243052]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:14 compute-2 sudo[243077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:14 compute-2 sudo[243077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:14 compute-2 sudo[243077]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:20:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:20:14 compute-2 nova_compute[226433]: 2026-01-22 14:20:14.284 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:14.323+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:14 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:14 compute-2 ceph-mon[77081]: pgmap v1565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:15.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:15 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:15 compute-2 nova_compute[226433]: 2026-01-22 14:20:15.517 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:15.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:16.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:16 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:16 compute-2 ceph-mon[77081]: pgmap v1566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:17.260+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:17 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:17 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:18.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:18 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:18 compute-2 ceph-mon[77081]: pgmap v1567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3670343237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:20:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3670343237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
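
Note: paired with the 'df' call just above, the 'osd pool get-quota' dispatch is how the Cinder RBD driver folds pool quotas into its capacity report for the volumes pool. The CLI equivalent, sketched:

    import json, subprocess

    quota = json.loads(subprocess.run(
        ['ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout)
    print(quota.get('quota_max_bytes'), quota.get('quota_max_objects'))
    # 0 means no quota is set on the pool.
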
Jan 22 14:20:19 compute-2 nova_compute[226433]: 2026-01-22 14:20:19.287 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:19.335+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:19 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:19 compute-2 sudo[243105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:19 compute-2 sudo[243105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-2 sudo[243105]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:19 compute-2 sudo[243130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:20:19 compute-2 sudo[243130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-2 sudo[243130]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:19 compute-2 sudo[243155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:19 compute-2 sudo[243155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-2 sudo[243155]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:19 compute-2 sudo[243180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:20:19 compute-2 sudo[243180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:20 compute-2 sudo[243180]: pam_unix(sudo:session): session closed for user root
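
Note: the sudo sequence above is cephadm's host reconciliation loop: /bin/true probes to confirm passwordless sudo, a 'which python3' lookup, then the copied cephadm binary running 'gather-facts' to refresh host metadata for the orchestrator. The same facts can be pulled by hand (output key names assumed):

    import json, subprocess

    facts = json.loads(subprocess.run(
        ['cephadm', 'gather-facts'],
        capture_output=True, text=True, check=True).stdout)
    print(facts.get('hostname'), facts.get('operating_system'))
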
Jan 22 14:20:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:20.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:20 compute-2 nova_compute[226433]: 2026-01-22 14:20:20.519 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:20 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:20 compute-2 ceph-mon[77081]: pgmap v1568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:20:21 compute-2 podman[243238]: 2026-01-22 14:20:21.020926278 +0000 UTC m=+0.084924593 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 22 14:20:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:21.337+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:21 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:20:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:20:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:20:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:20:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:20:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:20:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:21.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:20:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:20:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:22.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:20:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:22.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:22 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:22 compute-2 ceph-mon[77081]: pgmap v1569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:23.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:23 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:23 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:23.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:20:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:24.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:20:24 compute-2 nova_compute[226433]: 2026-01-22 14:20:24.289 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:24.314+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:24 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:24 compute-2 ceph-mon[77081]: pgmap v1570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:25.331+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:25 compute-2 nova_compute[226433]: 2026-01-22 14:20:25.521 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:25 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:25.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:26.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:26.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:26 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:26 compute-2 ceph-mon[77081]: pgmap v1571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:27 compute-2 sudo[243268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:27 compute-2 sudo[243268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:27 compute-2 sudo[243268]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:27 compute-2 sudo[243293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:20:27 compute-2 sudo[243293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:27 compute-2 sudo[243293]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:27.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:27.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:27 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:20:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.915481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627915536, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2007, "num_deletes": 251, "total_data_size": 3802973, "memory_usage": 3860896, "flush_reason": "Manual Compaction"}
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627933649, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 2486047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43526, "largest_seqno": 45528, "table_properties": {"data_size": 2478446, "index_size": 4159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19604, "raw_average_key_size": 21, "raw_value_size": 2461643, "raw_average_value_size": 2664, "num_data_blocks": 180, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091492, "oldest_key_time": 1769091492, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 18237 microseconds, and 7175 cpu microseconds.
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.933711) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 2486047 bytes OK
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.933743) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936817) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936841) EVENT_LOG_v1 {"time_micros": 1769091627936835, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936862) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3793713, prev total WAL file size 3809451, number of live WAL files 2.
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.937999) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(2427KB)], [84(8855KB)]
Jan 22 14:20:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627938050, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11553621, "oldest_snapshot_seqno": -1}
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 8652 keys, 9903653 bytes, temperature: kUnknown
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628012143, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9903653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9853112, "index_size": 27837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 231999, "raw_average_key_size": 26, "raw_value_size": 9702291, "raw_average_value_size": 1121, "num_data_blocks": 1064, "num_entries": 8652, "num_filter_entries": 8652, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.012552) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9903653 bytes
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.014192) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 133.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 9167, records dropped: 515 output_compression: NoCompression
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.014223) EVENT_LOG_v1 {"time_micros": 1769091628014208, "job": 52, "event": "compaction_finished", "compaction_time_micros": 74198, "compaction_time_cpu_micros": 23253, "output_level": 6, "num_output_files": 1, "total_output_size": 9903653, "num_input_records": 9167, "num_output_records": 8652, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628015356, "job": 52, "event": "table_file_deletion", "file_number": 86}
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628018564, "job": 52, "event": "table_file_deletion", "file_number": 84}
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.937907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:20:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:28.324+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:28 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:28 compute-2 ceph-mon[77081]: pgmap v1572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:28 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:29 compute-2 nova_compute[226433]: 2026-01-22 14:20:29.291 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:29.370+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:29.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:30.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:30 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:30 compute-2 nova_compute[226433]: 2026-01-22 14:20:30.524 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:31 compute-2 ceph-mon[77081]: pgmap v1573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:31 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:31.359+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:31.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:32 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:32 compute-2 ceph-mon[77081]: pgmap v1574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:32 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:32.377+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:33 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:33.331+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:33.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:34 compute-2 sudo[243321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:34 compute-2 sudo[243321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:34 compute-2 sudo[243321]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:34.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:34 compute-2 sudo[243346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:34 compute-2 sudo[243346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:34 compute-2 sudo[243346]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:34.306+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:34 compute-2 nova_compute[226433]: 2026-01-22 14:20:34.324 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:34 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:34 compute-2 ceph-mon[77081]: pgmap v1575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:35.335+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:35 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:35 compute-2 nova_compute[226433]: 2026-01-22 14:20:35.567 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:35.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:36 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:36 compute-2 ceph-mon[77081]: pgmap v1576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:36.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:36 compute-2 podman[243373]: 2026-01-22 14:20:36.986587124 +0000 UTC m=+0.049171975 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 14:20:37 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:37 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:37.414+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:37.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:20:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:20:38 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:38 compute-2 ceph-mon[77081]: pgmap v1577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:38.464+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:38 compute-2 nova_compute[226433]: 2026-01-22 14:20:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:39 compute-2 nova_compute[226433]: 2026-01-22 14:20:39.328 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:39 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:39.443+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:40.419+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:40 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:40 compute-2 ceph-mon[77081]: pgmap v1578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:40 compute-2 nova_compute[226433]: 2026-01-22 14:20:40.570 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:41.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:41 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:41.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:42.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:42.368+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:42 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:42 compute-2 ceph-mon[77081]: pgmap v1579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:42 compute-2 nova_compute[226433]: 2026-01-22 14:20:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:43.326+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:43 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:43 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:43.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:20:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:44.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:20:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:44.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:44 compute-2 nova_compute[226433]: 2026-01-22 14:20:44.327 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:44 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:44 compute-2 ceph-mon[77081]: pgmap v1580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:45.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:45 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:45 compute-2 nova_compute[226433]: 2026-01-22 14:20:45.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:45 compute-2 nova_compute[226433]: 2026-01-22 14:20:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:20:45 compute-2 nova_compute[226433]: 2026-01-22 14:20:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:45 compute-2 nova_compute[226433]: 2026-01-22 14:20:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:20:45 compute-2 nova_compute[226433]: 2026-01-22 14:20:45.573 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:45 compute-2 nova_compute[226433]: 2026-01-22 14:20:45.609 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:20:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:46.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:46.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:20:47.194 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:20:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:20:47.194 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:20:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:20:47.194 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:20:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:47.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:47 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:47 compute-2 ceph-mon[77081]: pgmap v1581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:47 compute-2 nova_compute[226433]: 2026-01-22 14:20:47.605 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:47 compute-2 nova_compute[226433]: 2026-01-22 14:20:47.606 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:47.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:48.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:48.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:48 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:48 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:48 compute-2 ceph-mon[77081]: pgmap v1582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:49.218+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.328 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.542 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:20:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:49.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:20:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3542653799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:49 compute-2 nova_compute[226433]: 2026-01-22 14:20:49.967 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:20:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:50.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.161 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.162 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4768MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.162 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.162 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:20:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:50.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.312 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.507 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.575 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:50 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:20:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/76593589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.926 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:20:50 compute-2 nova_compute[226433]: 2026-01-22 14:20:50.932 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:20:51 compute-2 nova_compute[226433]: 2026-01-22 14:20:51.045 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:20:51 compute-2 nova_compute[226433]: 2026-01-22 14:20:51.047 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:20:51 compute-2 nova_compute[226433]: 2026-01-22 14:20:51.047 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:20:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:51.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-2 ceph-mon[77081]: pgmap v1583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3542653799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:51 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/76593589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:51 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:52 compute-2 podman[243443]: 2026-01-22 14:20:52.011121816 +0000 UTC m=+0.074778897 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.048 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.048 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.048 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.077 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.077 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.078 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.078 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:20:52 compute-2 nova_compute[226433]: 2026-01-22 14:20:52.079 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:52.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:52.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:52 compute-2 ceph-mon[77081]: pgmap v1584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:52 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:52 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1954756605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:53.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:53 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:54.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:54 compute-2 sudo[243472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:54 compute-2 sudo[243472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:54 compute-2 sudo[243472]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:54 compute-2 sudo[243497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:20:54 compute-2 sudo[243497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:20:54 compute-2 sudo[243497]: pam_unix(sudo:session): session closed for user root
Jan 22 14:20:54 compute-2 nova_compute[226433]: 2026-01-22 14:20:54.330 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:54.333+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:54 compute-2 nova_compute[226433]: 2026-01-22 14:20:54.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:54 compute-2 ceph-mon[77081]: pgmap v1585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3394538097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:20:54 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:55.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:55 compute-2 nova_compute[226433]: 2026-01-22 14:20:55.578 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:55 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:20:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:56.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:20:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:56.334+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:56 compute-2 nova_compute[226433]: 2026-01-22 14:20:56.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:20:56 compute-2 ceph-mon[77081]: pgmap v1586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:56 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:57.345+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:57 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:20:57 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:20:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:58.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:58.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:58 compute-2 ceph-mon[77081]: pgmap v1587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:20:58 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:59 compute-2 nova_compute[226433]: 2026-01-22 14:20:59.332 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:20:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:59.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:20:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:20:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:20:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:20:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:20:59 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:00.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:00 compute-2 nova_compute[226433]: 2026-01-22 14:21:00.580 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:01 compute-2 ceph-mon[77081]: pgmap v1588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:01 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:01.473+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:01 compute-2 nova_compute[226433]: 2026-01-22 14:21:01.573 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:01 compute-2 nova_compute[226433]: 2026-01-22 14:21:01.574 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:21:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:02.519+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:02 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:03.557+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:03 compute-2 ceph-mon[77081]: pgmap v1589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:03 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:03 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:04.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:04 compute-2 nova_compute[226433]: 2026-01-22 14:21:04.335 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:04.597+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:04 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:04 compute-2 ceph-mon[77081]: pgmap v1590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:04 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:05.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:05 compute-2 nova_compute[226433]: 2026-01-22 14:21:05.581 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:05.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:06 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:06.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:06.628+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:07 compute-2 ceph-mon[77081]: pgmap v1591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:07 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:07.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:07.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:07 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 14:21:07 compute-2 podman[243529]: 2026-01-22 14:21:07.994214003 +0000 UTC m=+0.052626299 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:21:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:08.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:08 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:08 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:08.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:09 compute-2 ceph-mon[77081]: pgmap v1592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:09 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:09 compute-2 nova_compute[226433]: 2026-01-22 14:21:09.337 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:09.515+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:09.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:10.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:10 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:10 compute-2 ceph-mon[77081]: pgmap v1593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:10.539+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:10 compute-2 nova_compute[226433]: 2026-01-22 14:21:10.584 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:11 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:11.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:11.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:12.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:12 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:12 compute-2 ceph-mon[77081]: pgmap v1594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:12.498+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:13 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:13 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:13.536+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:13.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:14 compute-2 nova_compute[226433]: 2026-01-22 14:21:14.338 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:14 compute-2 sudo[243551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:14 compute-2 sudo[243551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:14 compute-2 sudo[243551]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:14 compute-2 sudo[243576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:14 compute-2 sudo[243576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:14 compute-2 sudo[243576]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:14 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:14 compute-2 ceph-mon[77081]: pgmap v1595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:14.488+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:15.445+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:15 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:15 compute-2 nova_compute[226433]: 2026-01-22 14:21:15.586 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:15.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:16.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:16 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:16 compute-2 ceph-mon[77081]: pgmap v1596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:17.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:17 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:18.307+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:18 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:18 compute-2 ceph-mon[77081]: pgmap v1597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4133897823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:21:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4133897823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:21:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:19.348+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:19 compute-2 nova_compute[226433]: 2026-01-22 14:21:19.358 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:19 compute-2 sshd-session[243604]: Connection closed by authenticating user root 45.148.10.240 port 52210 [preauth]
Jan 22 14:21:19 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:20.321+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:20 compute-2 nova_compute[226433]: 2026-01-22 14:21:20.589 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:21 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:21 compute-2 ceph-mon[77081]: pgmap v1598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:21.305+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:21:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:21:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:22.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:22 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:22 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:23 compute-2 podman[243608]: 2026-01-22 14:21:23.040292728 +0000 UTC m=+0.074684424 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:21:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:23.247+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:23 compute-2 ceph-mon[77081]: pgmap v1599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:23 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:23 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:24.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:24 compute-2 nova_compute[226433]: 2026-01-22 14:21:24.360 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:24 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:24 compute-2 ceph-mon[77081]: pgmap v1600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:25.301+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:25 compute-2 nova_compute[226433]: 2026-01-22 14:21:25.592 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:25 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:25.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:26.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:26 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:26 compute-2 ceph-mon[77081]: pgmap v1601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:27.267+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:27 compute-2 sudo[243636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:27 compute-2 sudo[243636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-2 sudo[243636]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-2 sudo[243661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:21:27 compute-2 sudo[243661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-2 sudo[243661]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-2 sudo[243686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:27 compute-2 sudo[243686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-2 sudo[243686]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-2 sudo[243711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:21:27 compute-2 sudo[243711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:27 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:27 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:27 compute-2 sudo[243711]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:27.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:28.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:29.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:29 compute-2 nova_compute[226433]: 2026-01-22 14:21:29.362 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:29 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:29 compute-2 ceph-mon[77081]: pgmap v1602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:21:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:21:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:21:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:21:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:21:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:21:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:21:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:29.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:30.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:30 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:30 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:30 compute-2 ceph-mon[77081]: pgmap v1603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:30.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:30 compute-2 nova_compute[226433]: 2026-01-22 14:21:30.595 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:31.265+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:31 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:32.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:32 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:32 compute-2 ceph-mon[77081]: pgmap v1604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:32.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:33.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:33 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:33 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:34.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:34.328+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:34 compute-2 nova_compute[226433]: 2026-01-22 14:21:34.402 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:34 compute-2 sudo[243770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:34 compute-2 sudo[243770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:34 compute-2 sudo[243770]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:34 compute-2 sudo[243795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:34 compute-2 sudo[243795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:34 compute-2 sudo[243795]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:34 compute-2 sudo[243801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:34 compute-2 sudo[243801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:34 compute-2 sudo[243801]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:34 compute-2 sudo[243845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:21:34 compute-2 sudo[243845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:34 compute-2 sudo[243845]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:35 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:35 compute-2 ceph-mon[77081]: pgmap v1605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:21:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:21:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:35.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:35 compute-2 nova_compute[226433]: 2026-01-22 14:21:35.631 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:21:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:21:36 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:36 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:36.252+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:37 compute-2 ceph-mon[77081]: pgmap v1606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:37 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:37.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:38 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:38.227+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:39 compute-2 podman[243873]: 2026-01-22 14:21:39.029638456 +0000 UTC m=+0.085806174 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:21:39 compute-2 ceph-mon[77081]: pgmap v1607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:39 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:39.224+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:39 compute-2 nova_compute[226433]: 2026-01-22 14:21:39.441 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:40 compute-2 nova_compute[226433]: 2026-01-22 14:21:40.097 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:40 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:40.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:40.265+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:40 compute-2 nova_compute[226433]: 2026-01-22 14:21:40.633 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:40 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:21:40.698 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:21:40 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:21:40.700 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:21:40 compute-2 nova_compute[226433]: 2026-01-22 14:21:40.699 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:41 compute-2 ceph-mon[77081]: pgmap v1608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:41 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:41.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:41.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:42.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:42.231+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:42 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:43.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:43 compute-2 nova_compute[226433]: 2026-01-22 14:21:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:43 compute-2 ceph-mon[77081]: pgmap v1609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:43 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:43 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:44.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:44.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:44 compute-2 nova_compute[226433]: 2026-01-22 14:21:44.443 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:44 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:44 compute-2 ceph-mon[77081]: pgmap v1610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:45.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:45 compute-2 nova_compute[226433]: 2026-01-22 14:21:45.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:45 compute-2 nova_compute[226433]: 2026-01-22 14:21:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:21:45 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:45 compute-2 nova_compute[226433]: 2026-01-22 14:21:45.635 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:46.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:46 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:46 compute-2 ceph-mon[77081]: pgmap v1611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:46 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:21:46.701 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:21:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:47.189+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:21:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:21:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:21:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:21:47 compute-2 nova_compute[226433]: 2026-01-22 14:21:47.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:47 compute-2 nova_compute[226433]: 2026-01-22 14:21:47.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:47 compute-2 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 14:21:47 compute-2 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:47.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:48.155+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:48.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:48 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:48 compute-2 ceph-mon[77081]: pgmap v1612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:49.154+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:49 compute-2 nova_compute[226433]: 2026-01-22 14:21:49.444 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:49 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:50.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.206 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:50.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.207 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.224 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.305 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.306 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.314 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.315 226437 INFO nova.compute.claims [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.445 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.467 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.468 226437 DEBUG nova.compute.provider_tree [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.482 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.503 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.545 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.545 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.571 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.596 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:50 compute-2 nova_compute[226433]: 2026-01-22 14:21:50.637 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:50 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:50 compute-2 ceph-mon[77081]: pgmap v1613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:21:51 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1686993375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.024 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.029 226437 DEBUG nova.compute.provider_tree [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.049 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.070 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.070 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.073 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.073 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.074 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.074 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.141 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.141 226437 DEBUG nova.network.neutron [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.165 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:21:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:51.175+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.185 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.285 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.286 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.287 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Creating image(s)
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.326 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.363 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.397 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.402 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.462 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.463 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.464 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.464 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:21:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:21:51 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1271797265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.493 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.496 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 8e98e700-52a4-44ff-8e11-9404cd11d871_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.512 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.573 226437 DEBUG nova.network.neutron [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.574 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.675 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.676 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4744MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.677 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.677 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:51 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1686993375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1271797265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.880 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 8e98e700-52a4-44ff-8e11-9404cd11d871_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:51 compute-2 nova_compute[226433]: 2026-01-22 14:21:51.948 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] resizing rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 22 14:21:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.053 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.053 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.060 226437 DEBUG nova.objects.instance [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.073 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.073 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Ensure instance console log exists: /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.074 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.074 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.074 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.076 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.079 226437 WARNING nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.087 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.088 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.096 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.097 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.098 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.098 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.099 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.099 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.099 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.101 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.101 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.101 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
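The hardware lines above walk the topology search: flavor and image impose no limits (0:0:0), so the maxima default to 65536 each, and for 1 vCPU only one factorization survives. A hedged sketch of the idea (not Nova's exact algorithm): enumerate sockets/cores/threads factorizations of the vCPU count that fit the limits.

```python
# Sketch of the topology enumeration logged above; illustrative only.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topologies = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topologies.append((s, c, t))
    return topologies

# For the 1-vCPU m1.nano flavor this yields a single candidate, matching
# "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
print(possible_topologies(1))  # [(1, 1, 1)]
```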
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.104 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:52.167+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
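osd.2 keeps reporting the same 9 slow ops throughout this window (oldest one an omap read against rbd_mirror_snapshot_schedule). A quick way to surface the matching SLOW_OPS detail, assuming the client.openstack identity used elsewhere in this log has monitor read caps:

```python
# Sketch: dump the health detail behind the repeated slow-op warnings.
import subprocess

print(subprocess.check_output([
    'ceph', 'health', 'detail',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
]).decode())
# Expect a SLOW_OPS section naming osd.2, matching the
# "osd.2 has slow ops (SLOW_OPS)" health updates later in this log.
```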
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.199 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:21:52 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/910785072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.553 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
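The "ceph mon dump --format=json" call above is how the monitor addresses that later appear as <host> entries in the guest XML are discovered. A self-contained sketch of running the same command and reading the mon map (assumes a reachable cluster and the client.openstack keyring; field names follow the usual mon-dump JSON layout):

```python
# Sketch of the monitor-discovery step logged above; illustrative only.
import json
import subprocess

out = subprocess.check_output([
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
])
mon_map = json.loads(out)
for mon in mon_map['mons']:
    print(mon['name'], mon['public_addr'])
```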
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.586 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.592 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:21:52 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2844275053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.621 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.626 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.647 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
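Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. Checking that against the values logged above:

```python
# Capacity implied by the inventory reported for provider d4dcb68c-...:
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 17.1
```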
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.680 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:21:52 compute-2 nova_compute[226433]: 2026-01-22 14:21:52.681 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:21:52 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:52 compute-2 ceph-mon[77081]: pgmap v1614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 299 MiB data, 375 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:21:52 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/910785072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:21:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2844275053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:21:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2190805920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.101 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.102 226437 DEBUG nova.objects.instance [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.122 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] End _get_guest_xml xml=<domain type="kvm">
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <uuid>8e98e700-52a4-44ff-8e11-9404cd11d871</uuid>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <name>instance-0000000d</name>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <memory>131072</memory>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <vcpu>1</vcpu>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <metadata>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <nova:name>tempest-ServersOnMultiNodesTest-server-63037555</nova:name>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <nova:creationTime>2026-01-22 14:21:52</nova:creationTime>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <nova:flavor name="m1.nano">
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <nova:memory>128</nova:memory>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <nova:disk>1</nova:disk>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <nova:swap>0</nova:swap>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <nova:ephemeral>0</nova:ephemeral>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <nova:vcpus>1</nova:vcpus>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       </nova:flavor>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <nova:owner>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <nova:user uuid="a5be1e8103e142238ae4c912393095c4">tempest-ServersOnMultiNodesTest-59245381-project-member</nova:user>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <nova:project uuid="688eff2d52114848b8ae16c9cfaa49d9">tempest-ServersOnMultiNodesTest-59245381</nova:project>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       </nova:owner>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <nova:ports/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </nova:instance>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   </metadata>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <sysinfo type="smbios">
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <system>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <entry name="manufacturer">RDO</entry>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <entry name="product">OpenStack Compute</entry>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <entry name="serial">8e98e700-52a4-44ff-8e11-9404cd11d871</entry>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <entry name="uuid">8e98e700-52a4-44ff-8e11-9404cd11d871</entry>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <entry name="family">Virtual Machine</entry>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </system>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   </sysinfo>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <os>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <boot dev="hd"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <smbios mode="sysinfo"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   </os>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <features>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <acpi/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <apic/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <vmcoreinfo/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   </features>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <clock offset="utc">
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <timer name="pit" tickpolicy="delay"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <timer name="hpet" present="no"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   </clock>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <cpu mode="custom" match="exact">
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <model>Nehalem</model>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <topology sockets="1" cores="1" threads="1"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   </cpu>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   <devices>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <disk type="network" device="disk">
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/8e98e700-52a4-44ff-8e11-9404cd11d871_disk">
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       </source>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <target dev="vda" bus="virtio"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <disk type="network" device="cdrom">
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config">
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       </source>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:21:53 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <target dev="sda" bus="sata"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <serial type="pty">
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <log file="/var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/console.log" append="off"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </serial>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <video>
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </video>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <input type="tablet" bus="usb"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <rng model="virtio">
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <backend model="random">/dev/urandom</backend>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </rng>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <controller type="usb" index="0"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     <memballoon model="virtio">
Jan 22 14:21:53 compute-2 nova_compute[226433]:       <stats period="10"/>
Jan 22 14:21:53 compute-2 nova_compute[226433]:     </memballoon>
Jan 22 14:21:53 compute-2 nova_compute[226433]:   </devices>
Jan 22 14:21:53 compute-2 nova_compute[226433]: </domain>
Jan 22 14:21:53 compute-2 nova_compute[226433]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
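The domain XML dumped above is plain libvirt XML, so its RBD-backed disks can be inspected with the standard library alone. A small sketch, assuming domain_xml holds the <domain>...</domain> text from the log:

```python
# Sketch: pull the RBD sources out of a libvirt domain XML like the one above.
import xml.etree.ElementTree as ET

def rbd_sources(domain_xml):
    root = ET.fromstring(domain_xml)
    for disk in root.findall('./devices/disk'):
        source = disk.find('source')
        if source is not None and source.get('protocol') == 'rbd':
            hosts = [h.get('name') for h in source.findall('host')]
            yield source.get('name'), hosts

# For this instance it would yield vms/<uuid>_disk and vms/<uuid>_disk.config,
# each backed by the three monitors on port 6789.
```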
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.178 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.178 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.179 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Using config drive
Jan 22 14:21:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:53.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.203 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.360 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Creating config drive at /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.364 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lnqu80d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.489 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lnqu80d" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.530 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.534 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.706 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:21:53 compute-2 nova_compute[226433]: 2026-01-22 14:21:53.707 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deleting local config drive /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config because it was imported into RBD.
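The sequence just logged is the config-drive-on-RBD path: build the ISO locally with mkisofs, import it into the vms pool, then delete the local copy. A condensed sketch of the same flow; the staging directory and <uuid> path are placeholders, not the exact values Nova used:

```python
# Condensed sketch of the config-drive flow logged above; illustrative only.
import os
import subprocess

iso = '/var/lib/nova/instances/<uuid>/disk.config'  # placeholder path
subprocess.check_call(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                       '-allow-lowercase', '-allow-multidot', '-l',
                       '-J', '-r', '-V', 'config-2', '/tmp/metadata-dir'])
subprocess.check_call(['rbd', 'import', '--pool', 'vms', iso,
                       '<uuid>_disk.config', '--image-format=2',
                       '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
os.remove(iso)  # local copy deleted once the image lives in RBD
```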
Jan 22 14:21:53 compute-2 systemd[1]: Starting libvirt secret daemon...
Jan 22 14:21:53 compute-2 systemd[1]: Started libvirt secret daemon.
Jan 22 14:21:53 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 2702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:21:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2190805920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:21:53 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:53 compute-2 systemd-machined[194970]: New machine qemu-3-instance-0000000d.
Jan 22 14:21:53 compute-2 systemd[1]: Started Virtual Machine qemu-3-instance-0000000d.
Jan 22 14:21:53 compute-2 podman[244251]: 2026-01-22 14:21:53.880666044 +0000 UTC m=+0.122748111 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:21:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:54.171+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.402 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091714.4021378, 8e98e700-52a4-44ff-8e11-9404cd11d871 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.404 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] VM Resumed (Lifecycle Event)
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.406 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.407 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.411 226437 INFO nova.virt.libvirt.driver [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance spawned successfully.
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.411 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.427 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.433 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.437 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.438 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.438 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.439 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.439 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.440 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.447 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.466 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.466 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091714.4036539, 8e98e700-52a4-44ff-8e11-9404cd11d871 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.467 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] VM Started (Lifecycle Event)
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.487 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.491 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.495 226437 INFO nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 3.21 seconds to spawn the instance on the hypervisor.
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.495 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.517 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.549 226437 INFO nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 4.28 seconds to build instance.
Jan 22 14:21:54 compute-2 nova_compute[226433]: 2026-01-22 14:21:54.562 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
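The Resumed/Started lifecycle handling above shows the sync rule in action: the DB still says power_state 0 (NOSTATE) while the VM reports 1 (RUNNING), but because task_state is "spawning" the sync is skipped rather than written back. A hedged sketch of that decision (constants mirror nova's power_state values; not Nova's actual code):

```python
# Sketch of the power-state sync decision logged above; illustrative only.
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return 'skip'  # "During sync_power_state the instance has a pending task"
    if db_power_state != vm_power_state:
        return 'update-db'
    return 'in-sync'

print(sync_power_state(NOSTATE, RUNNING, 'spawning'))  # skip
```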
Jan 22 14:21:54 compute-2 sudo[244355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:54 compute-2 sudo[244355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:54 compute-2 sudo[244355]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:54 compute-2 sudo[244380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:21:54 compute-2 sudo[244380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:21:54 compute-2 sudo[244380]: pam_unix(sudo:session): session closed for user root
Jan 22 14:21:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:55.198+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:55 compute-2 ceph-mon[77081]: pgmap v1615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 313 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 756 KiB/s wr, 14 op/s
Jan 22 14:21:55 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/699726964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:55 compute-2 nova_compute[226433]: 2026-01-22 14:21:55.639 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:21:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:56 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:56 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3661583463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:21:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:56.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:56 compute-2 nova_compute[226433]: 2026-01-22 14:21:56.652 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:21:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:57.157+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:57 compute-2 ceph-mon[77081]: pgmap v1616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 14:21:57 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:21:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:21:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:58.149+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:58 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:21:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:21:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:21:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:21:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:59.134+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:21:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:59 compute-2 ceph-mon[77081]: pgmap v1617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 22 14:21:59 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:21:59 compute-2 nova_compute[226433]: 2026-01-22 14:21:59.449 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:22:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:00.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:22:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:00.085+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:00.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:00 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:00 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3894054400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:00 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 14:22:00 compute-2 nova_compute[226433]: 2026-01-22 14:22:00.643 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:01.130+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:01 compute-2 ceph-mon[77081]: pgmap v1618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 22 14:22:01 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:01 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1264698857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:22:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:02.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:22:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:02.089+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:02.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:02 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1780644383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:02 compute-2 ceph-mon[77081]: pgmap v1619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 345 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:22:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/336972401' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:02 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 2707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:03.090+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:03 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:22:03 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/785481063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:03 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2130573325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:22:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:22:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:04.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:22:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:04.116+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:22:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:04.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:22:04 compute-2 nova_compute[226433]: 2026-01-22 14:22:04.506 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:04 compute-2 nova_compute[226433]: 2026-01-22 14:22:04.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:04 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:04 compute-2 ceph-mon[77081]: pgmap v1620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 369 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Jan 22 14:22:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:05.152+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:05 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:05 compute-2 nova_compute[226433]: 2026-01-22 14:22:05.679 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:06.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:06.196+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:06.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:06 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:06 compute-2 ceph-mon[77081]: pgmap v1621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 438 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 150 op/s
Jan 22 14:22:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:07.147+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:07 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:07 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2717 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:08.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:08.160+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:22:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:22:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:08 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:08 compute-2 ceph-mon[77081]: pgmap v1622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 438 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 119 op/s
Jan 22 14:22:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:09.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:09 compute-2 nova_compute[226433]: 2026-01-22 14:22:09.509 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:09 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:09 compute-2 podman[244413]: 2026-01-22 14:22:09.995271672 +0000 UTC m=+0.058452830 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 14:22:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:10.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:10.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:10 compute-2 nova_compute[226433]: 2026-01-22 14:22:10.681 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:10 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:10 compute-2 ceph-mon[77081]: pgmap v1623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 450 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 172 op/s
Jan 22 14:22:10 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:11.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:12.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:12.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:22:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:22:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:13.187+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:14 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:14 compute-2 ceph-mon[77081]: pgmap v1624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 213 op/s
Jan 22 14:22:14 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:14 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2722 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 14:22:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:14.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 14:22:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:14.138+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:22:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:14.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:22:14 compute-2 nova_compute[226433]: 2026-01-22 14:22:14.511 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:14 compute-2 sudo[244435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:14 compute-2 sudo[244435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:14 compute-2 sudo[244435]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:14 compute-2 sudo[244460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:14 compute-2 sudo[244460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:14 compute-2 sudo[244460]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:15 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:15 compute-2 ceph-mon[77081]: pgmap v1625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Jan 22 14:22:15 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:15.145+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:15 compute-2 nova_compute[226433]: 2026-01-22 14:22:15.683 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:22:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:16.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:22:16 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:16.105+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:16.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:17 compute-2 ceph-mon[77081]: pgmap v1626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.9 MiB/s wr, 166 op/s
Jan 22 14:22:17 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:17.062+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:22:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:18.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:22:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:18.057+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:18 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:18 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2727 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:18.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:19.077+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:19 compute-2 ceph-mon[77081]: pgmap v1627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 22 14:22:19 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:22:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2332942019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:22:19 compute-2 nova_compute[226433]: 2026-01-22 14:22:19.513 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:20.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:20.063+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:20.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:20 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:20 compute-2 ceph-mon[77081]: pgmap v1628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 22 14:22:20 compute-2 nova_compute[226433]: 2026-01-22 14:22:20.685 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:21.019+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:21 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:21.987+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:22:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:22.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:22:22 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:22 compute-2 ceph-mon[77081]: pgmap v1629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Jan 22 14:22:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:22.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:23 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:23 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:23.980+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:24 compute-2 podman[244491]: 2026-01-22 14:22:24.026522751 +0000 UTC m=+0.087759392 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:22:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:22:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:24.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:22:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 22 14:22:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:24.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 22 14:22:24 compute-2 sshd-session[244489]: Invalid user ubuntu from 92.118.39.95 port 54918
Jan 22 14:22:24 compute-2 nova_compute[226433]: 2026-01-22 14:22:24.516 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:24 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:24 compute-2 ceph-mon[77081]: pgmap v1630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 3 op/s
Jan 22 14:22:24 compute-2 sshd-session[244489]: Connection closed by invalid user ubuntu 92.118.39.95 port 54918 [preauth]
Jan 22 14:22:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:24.942+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.032 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "8331b067-1b3f-4a1d-a596-e966f6de776a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.033 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "8331b067-1b3f-4a1d-a596-e966f6de776a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.051 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.134 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.135 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.144 226437 DEBUG nova.virt.hardware [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.145 226437 INFO nova.compute.claims [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.355 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:22:25 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.732 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:22:25 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4155437798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.840 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.846 226437 DEBUG nova.compute.provider_tree [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.868 226437 DEBUG nova.scheduler.client.report [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.903 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.904 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.955 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.956 226437 DEBUG nova.network.neutron [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:22:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:25.973+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:25 compute-2 nova_compute[226433]: 2026-01-22 14:22:25.979 226437 INFO nova.virt.libvirt.driver [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.003 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:22:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.097 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.098 226437 DEBUG nova.virt.libvirt.driver [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.099 226437 INFO nova.virt.libvirt.driver [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Creating image(s)
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.125 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.152 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.177 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.180 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.231 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.232 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.233 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.233 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:22:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.256 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.260 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 8331b067-1b3f-4a1d-a596-e966f6de776a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:22:26 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:26 compute-2 ceph-mon[77081]: pgmap v1631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 3 op/s
Jan 22 14:22:26 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4155437798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.687 226437 DEBUG nova.network.neutron [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 22 14:22:26 compute-2 nova_compute[226433]: 2026-01-22 14:22:26.688 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:22:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:26.930+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:27 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:27.946+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:28.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:28 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:28 compute-2 ceph-mon[77081]: pgmap v1632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 471 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 3 op/s
Jan 22 14:22:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:28.947+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:29 compute-2 nova_compute[226433]: 2026-01-22 14:22:29.518 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:29 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:29.905+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:30.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:30 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:30 compute-2 ceph-mon[77081]: pgmap v1633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 479 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 351 KiB/s wr, 6 op/s
Jan 22 14:22:30 compute-2 nova_compute[226433]: 2026-01-22 14:22:30.735 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:30.939+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:31 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:31.900+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:32.862+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:33 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:33 compute-2 ceph-mon[77081]: pgmap v1634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 17 op/s
Jan 22 14:22:33 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:33.899+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:34 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:22:34 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:22:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:22:34 compute-2 nova_compute[226433]: 2026-01-22 14:22:34.520 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:34 compute-2 sudo[244640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:34 compute-2 sudo[244640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:34 compute-2 sudo[244640]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:34 compute-2 sudo[244665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:22:34 compute-2 sudo[244665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:34 compute-2 sudo[244665]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:34 compute-2 sudo[244690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:34 compute-2 sudo[244690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:34 compute-2 sudo[244690]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:34.913+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:34 compute-2 sudo[244715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:22:34 compute-2 sudo[244715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:35 compute-2 sudo[244740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:35 compute-2 sudo[244740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:35 compute-2 sudo[244740]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:35 compute-2 ceph-mon[77081]: pgmap v1635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:22:35 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:35 compute-2 sudo[244765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:35 compute-2 sudo[244765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:35 compute-2 sudo[244765]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:35 compute-2 sudo[244715]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:35 compute-2 nova_compute[226433]: 2026-01-22 14:22:35.774 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:35.948+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:36 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:22:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:22:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:22:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:22:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:22:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:22:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:36.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:37 compute-2 ceph-mon[77081]: pgmap v1636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:22:37 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:37.968+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:38.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:38 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:38 compute-2 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:38.996+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:39 compute-2 ceph-mon[77081]: pgmap v1637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 15 op/s
Jan 22 14:22:39 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:39 compute-2 nova_compute[226433]: 2026-01-22 14:22:39.522 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:40.008+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:40 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:40 compute-2 ceph-mon[77081]: pgmap v1638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 22 14:22:40 compute-2 nova_compute[226433]: 2026-01-22 14:22:40.776 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:40.968+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:40 compute-2 podman[244823]: 2026-01-22 14:22:40.991351203 +0000 UTC m=+0.052163416 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 14:22:41 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:22:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:22:41 compute-2 nova_compute[226433]: 2026-01-22 14:22:41.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:41 compute-2 sudo[244842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:41 compute-2 sudo[244842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:41 compute-2 sudo[244842]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:41 compute-2 sudo[244867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:22:41 compute-2 sudo[244867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:41 compute-2 sudo[244867]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:41.933+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:42.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:42 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:42 compute-2 ceph-mon[77081]: pgmap v1639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 14:22:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:42.962+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:43 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:43 compute-2 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:43 compute-2 nova_compute[226433]: 2026-01-22 14:22:43.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:43 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:22:43.890 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:22:43 compute-2 nova_compute[226433]: 2026-01-22 14:22:43.890 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:43 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:22:43.891 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:22:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:44.002+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:22:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:44.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:22:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:44 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:44 compute-2 ceph-mon[77081]: pgmap v1640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:44 compute-2 nova_compute[226433]: 2026-01-22 14:22:44.525 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:45.005+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:45 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:45 compute-2 nova_compute[226433]: 2026-01-22 14:22:45.835 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:46.035+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:46.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:46.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:46 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:46 compute-2 ceph-mon[77081]: pgmap v1641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:46 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:22:46.893 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:22:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:47.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:22:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:22:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:22:47.196 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:22:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:22:47.196 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.289939) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767290034, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 256, "total_data_size": 3946951, "memory_usage": 4011536, "flush_reason": "Manual Compaction"}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767306796, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 2581347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45533, "largest_seqno": 47577, "table_properties": {"data_size": 2573555, "index_size": 4350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19900, "raw_average_key_size": 21, "raw_value_size": 2556335, "raw_average_value_size": 2699, "num_data_blocks": 188, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091627, "oldest_key_time": 1769091627, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 16902 microseconds, and 6142 cpu microseconds.
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.306853) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 2581347 bytes OK
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.306873) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.308728) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.308746) EVENT_LOG_v1 {"time_micros": 1769091767308741, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.308765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3937476, prev total WAL file size 3937476, number of live WAL files 2.
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(2520KB)], [87(9671KB)]
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767309821, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 12485000, "oldest_snapshot_seqno": -1}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 9074 keys, 12329262 bytes, temperature: kUnknown
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767390460, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 12329262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12273894, "index_size": 31576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 242720, "raw_average_key_size": 26, "raw_value_size": 12113724, "raw_average_value_size": 1334, "num_data_blocks": 1217, "num_entries": 9074, "num_filter_entries": 9074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.390742) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12329262 bytes
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.392054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.6 rd, 152.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.4 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 9599, records dropped: 525 output_compression: NoCompression
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.392068) EVENT_LOG_v1 {"time_micros": 1769091767392061, "job": 54, "event": "compaction_finished", "compaction_time_micros": 80754, "compaction_time_cpu_micros": 27020, "output_level": 6, "num_output_files": 1, "total_output_size": 12329262, "num_input_records": 9599, "num_output_records": 9074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767392516, "job": 54, "event": "table_file_deletion", "file_number": 89}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767393976, "job": 54, "event": "table_file_deletion", "file_number": 87}
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:22:47 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:47 compute-2 nova_compute[226433]: 2026-01-22 14:22:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:47 compute-2 nova_compute[226433]: 2026-01-22 14:22:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:47 compute-2 nova_compute[226433]: 2026-01-22 14:22:47.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:22:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:48.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:48.077+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:48.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:48 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:48 compute-2 ceph-mon[77081]: pgmap v1642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:48 compute-2 nova_compute[226433]: 2026-01-22 14:22:48.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:49.074+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:49 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:49 compute-2 nova_compute[226433]: 2026-01-22 14:22:49.527 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:50.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:50.103+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:50 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:50 compute-2 ceph-mon[77081]: pgmap v1643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s wr, 1 op/s
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.542 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.542 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:22:50 compute-2 nova_compute[226433]: 2026-01-22 14:22:50.836 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:51.143+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:51 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:51 compute-2 nova_compute[226433]: 2026-01-22 14:22:51.510 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:22:51 compute-2 nova_compute[226433]: 2026-01-22 14:22:51.510 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:22:51 compute-2 nova_compute[226433]: 2026-01-22 14:22:51.511 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:22:51 compute-2 nova_compute[226433]: 2026-01-22 14:22:51.511 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:22:51 compute-2 nova_compute[226433]: 2026-01-22 14:22:51.785 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:22:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:52.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:52.152+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:52 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:52 compute-2 ceph-mon[77081]: pgmap v1644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 14:22:52 compute-2 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.779 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.800 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.801 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.802 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.802 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.832 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:22:52 compute-2 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:22:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:53.125+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:22:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4134154296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.285 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.391 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.391 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:22:53 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4134154296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.588 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.589 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4544MB free_disk=20.771656036376953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.590 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.590 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.683 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.683 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:22:53 compute-2 nova_compute[226433]: 2026-01-22 14:22:53.805 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:22:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:54.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:54.123+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:22:54 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3543596929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:54 compute-2 nova_compute[226433]: 2026-01-22 14:22:54.249 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:22:54 compute-2 nova_compute[226433]: 2026-01-22 14:22:54.256 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:22:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:54 compute-2 nova_compute[226433]: 2026-01-22 14:22:54.274 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:22:54 compute-2 nova_compute[226433]: 2026-01-22 14:22:54.294 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:22:54 compute-2 nova_compute[226433]: 2026-01-22 14:22:54.294 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:22:54 compute-2 nova_compute[226433]: 2026-01-22 14:22:54.528 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:54 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:54 compute-2 ceph-mon[77081]: pgmap v1645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:22:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3543596929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:55 compute-2 podman[244944]: 2026-01-22 14:22:55.036900934 +0000 UTC m=+0.092069023 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 14:22:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:55.132+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:55 compute-2 sudo[244972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:55 compute-2 sudo[244972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:55 compute-2 sudo[244972]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:55 compute-2 sudo[244997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:22:55 compute-2 sudo[244997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:22:55 compute-2 sudo[244997]: pam_unix(sudo:session): session closed for user root
Jan 22 14:22:55 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:55 compute-2 nova_compute[226433]: 2026-01-22 14:22:55.879 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:56 compute-2 nova_compute[226433]: 2026-01-22 14:22:56.009 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:22:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:22:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:56.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:22:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:56.107+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:56 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:56 compute-2 ceph-mon[77081]: pgmap v1646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:22:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:57.092+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:22:57 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:22:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/542191379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:57 compute-2 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:22:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3278762935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:22:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:58.140+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:22:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:22:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:22:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:58.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:22:58 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:22:58 compute-2 ceph-mon[77081]: pgmap v1647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:22:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:22:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:59.099+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:22:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:22:59 compute-2 nova_compute[226433]: 2026-01-22 14:22:59.530 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:22:59 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:00.076+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:00 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:00 compute-2 ceph-mon[77081]: pgmap v1648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:00 compute-2 nova_compute[226433]: 2026-01-22 14:23:00.881 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:01.047+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:01 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:02.043+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:02.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:02 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:02 compute-2 ceph-mon[77081]: pgmap v1649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:03.008+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:03 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:03 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:04.040+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:04 compute-2 nova_compute[226433]: 2026-01-22 14:23:04.533 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:04 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:04 compute-2 ceph-mon[77081]: pgmap v1650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:05.004+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:05 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:05 compute-2 nova_compute[226433]: 2026-01-22 14:23:05.936 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:05.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:06 compute-2 ceph-mon[77081]: pgmap v1651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:06 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:06.912+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:07 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:07.924+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:08.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:08 compute-2 ceph-mon[77081]: pgmap v1652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:08 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:08.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:09 compute-2 nova_compute[226433]: 2026-01-22 14:23:09.534 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:09 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:09.910+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:10.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:10 compute-2 ceph-mon[77081]: pgmap v1653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:10 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:10.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:10 compute-2 nova_compute[226433]: 2026-01-22 14:23:10.940 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:11 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:11.919+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:11 compute-2 podman[245030]: 2026-01-22 14:23:11.989327433 +0000 UTC m=+0.053358649 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:23:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:12.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:12 compute-2 ceph-mon[77081]: pgmap v1654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:12 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2783 sec, osd.2 has slow ops (SLOW_OPS)
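Annotation: the SLOW_OPS update above recurs through this whole window with the same 29 ops, so the per-second osd.2 reports are one stuck workload being re-announced rather than new incidents. A minimal sketch for summarizing a journal export like this one (hypothetical helper, not part of Ceph; assumes the log text is piped on stdin):

    #!/usr/bin/env python3
    # slow_ops_summary.py -- hypothetical helper: tally get_health_metrics
    # slow-op reports per OSD and the largest "blocked for N sec" age seen.
    import re
    import sys

    report = re.compile(r'(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops')
    blocked = re.compile(r'oldest one blocked for (\d+) sec, (osd\.\d+)')

    reports = {}   # osd -> number of report lines seen
    current = {}   # osd -> most recently reported slow-op count
    worst = {}     # osd -> max blocked age in seconds

    for line in sys.stdin:
        m = report.search(line)
        if m:
            osd, n = m.group(1), int(m.group(2))
            reports[osd] = reports.get(osd, 0) + 1
            current[osd] = n
        m = blocked.search(line)
        if m:
            age, osd = int(m.group(1)), m.group(2)
            worst[osd] = max(worst.get(osd, 0), age)

    for osd in sorted(set(reports) | set(worst)):
        print(f"{osd}: {reports.get(osd, 0)} report lines, "
              f"{current.get(osd, 0)} slow ops, oldest blocked {worst.get(osd, 0)} s")

Run over this section it would print a single line for osd.2 with 29 slow ops and an oldest age of 2808 s.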
Jan 22 14:23:12 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:12.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:13 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:13.942+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:14.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:14 compute-2 nova_compute[226433]: 2026-01-22 14:23:14.536 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:14 compute-2 ceph-mon[77081]: pgmap v1655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:14 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:14.943+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:15 compute-2 sudo[245051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:15 compute-2 sudo[245051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:15 compute-2 sudo[245051]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:15 compute-2 sudo[245076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:15 compute-2 sudo[245076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:15 compute-2 sudo[245076]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:15.900+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:15 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:15 compute-2 nova_compute[226433]: 2026-01-22 14:23:15.941 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:16.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:16.909+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:16 compute-2 ceph-mon[77081]: pgmap v1656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:16 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:17.944+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:17 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:17 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:17.987426) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797987485, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 662, "num_deletes": 251, "total_data_size": 873899, "memory_usage": 885624, "flush_reason": "Manual Compaction"}
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797997747, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 573365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47582, "largest_seqno": 48239, "table_properties": {"data_size": 570254, "index_size": 955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8063, "raw_average_key_size": 19, "raw_value_size": 563778, "raw_average_value_size": 1375, "num_data_blocks": 42, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091767, "oldest_key_time": 1769091767, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 10364 microseconds, and 4208 cpu microseconds.
Jan 22 14:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:17.997797) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 573365 bytes OK
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:17.997820) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.003885) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.003913) EVENT_LOG_v1 {"time_micros": 1769091798003906, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.003937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 870214, prev total WAL file size 870214, number of live WAL files 2.
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.004716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(559KB)], [90(11MB)]
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798004764, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 12902627, "oldest_snapshot_seqno": -1}
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8974 keys, 11173637 bytes, temperature: kUnknown
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798082010, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 11173637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11119869, "index_size": 30232, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22469, "raw_key_size": 241529, "raw_average_key_size": 26, "raw_value_size": 10962080, "raw_average_value_size": 1221, "num_data_blocks": 1156, "num_entries": 8974, "num_filter_entries": 8974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.082353) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 11173637 bytes
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.084362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 144.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(42.0) write-amplify(19.5) OK, records in: 9484, records dropped: 510 output_compression: NoCompression
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.084393) EVENT_LOG_v1 {"time_micros": 1769091798084380, "job": 56, "event": "compaction_finished", "compaction_time_micros": 77337, "compaction_time_cpu_micros": 25005, "output_level": 6, "num_output_files": 1, "total_output_size": 11173637, "num_input_records": 9484, "num_output_records": 8974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798084716, "job": 56, "event": "table_file_deletion", "file_number": 92}
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798088725, "job": 56, "event": "table_file_deletion", "file_number": 90}
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.004653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:23:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
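Annotation: the JOB 56 compaction summary above contains everything needed to verify its own amplification figures. The L0 input is table #92 (573365 bytes, from the JOB 55 flush), the compaction read 12902627 bytes in total ("input_data_size"), and it wrote table #93 (11173637 bytes). A quick arithmetic check using only those logged numbers:

    # Check of the RocksDB JOB 56 amplification figures logged above.
    l0_input = 573365         # table #92 (the Level-0 flush output), bytes
    total_input = 12902627    # "input_data_size" from compaction_started
    total_output = 11173637   # table #93 size / "total_output_size"

    write_amp = total_output / l0_input                # bytes written per L0 byte
    rw_amp = (total_input + total_output) / l0_input   # bytes moved per L0 byte
    print(f"write-amplify      {write_amp:.1f}")       # ~19.5, as logged
    print(f"read-write-amplify {rw_amp:.1f}")          # ~42.0, as logged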
Jan 22 14:23:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 14:23:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:18.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 14:23:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:18.955+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:18 compute-2 ceph-mon[77081]: pgmap v1657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:18 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1035172847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:23:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1035172847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
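Annotation: the two dispatches above look like routine capacity polling by client.openstack (the "volumes" pool suggests a Cinder-style periodic stats query). The same mon commands can be reproduced by hand; a minimal sketch, assuming the ceph CLI and an authorized keyring are available on a cluster host:

    # Reproduce the two mon commands dispatched above ({"prefix":"df"} and
    # {"prefix":"osd pool get-quota","pool":"volumes"}) via the ceph CLI.
    import json
    import subprocess

    def mon_cmd(*args):
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    # Key layout below matches recent Ceph releases.
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
    print(quota)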
Jan 22 14:23:19 compute-2 nova_compute[226433]: 2026-01-22 14:23:19.538 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:19.907+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:20 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:20.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:20.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:20 compute-2 nova_compute[226433]: 2026-01-22 14:23:20.945 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:21 compute-2 ceph-mon[77081]: pgmap v1658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:21 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:21.958+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:23:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:22.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:23:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:22.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:22 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:22 compute-2 ceph-mon[77081]: pgmap v1659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:22.949+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:23 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:23 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:23.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:24.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:24 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:24 compute-2 ceph-mon[77081]: pgmap v1660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:24 compute-2 nova_compute[226433]: 2026-01-22 14:23:24.540 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:24.957+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:25 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:25.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:25 compute-2 nova_compute[226433]: 2026-01-22 14:23:25.948 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:26 compute-2 podman[245107]: 2026-01-22 14:23:26.04374261 +0000 UTC m=+0.095013042 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:23:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:26.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:26 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:26 compute-2 ceph-mon[77081]: pgmap v1661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:26.967+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:27 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:27.974+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:23:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:28.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:23:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:28 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:28 compute-2 ceph-mon[77081]: pgmap v1662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:29.019+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:29 compute-2 nova_compute[226433]: 2026-01-22 14:23:29.542 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:29 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:30.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:23:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:30.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:23:30 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:30 compute-2 ceph-mon[77081]: pgmap v1663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:30 compute-2 nova_compute[226433]: 2026-01-22 14:23:30.951 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:31.040+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:31 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:32.032+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:23:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:23:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:32.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:32 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:32 compute-2 ceph-mon[77081]: pgmap v1664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:32 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:33.012+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:33 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:34.041+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:34 compute-2 nova_compute[226433]: 2026-01-22 14:23:34.544 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:34 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:34 compute-2 ceph-mon[77081]: pgmap v1665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:34 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:35.015+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:35 compute-2 sudo[245137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:35 compute-2 sudo[245137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:35 compute-2 sudo[245137]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:35 compute-2 sudo[245162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:35 compute-2 sudo[245162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:35 compute-2 sudo[245162]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:35 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:35 compute-2 nova_compute[226433]: 2026-01-22 14:23:35.954 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:36.021+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:36.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:36 compute-2 ceph-mon[77081]: pgmap v1666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:36 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:37.066+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:37 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:37 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2808 sec, osd.2 has slow ops (SLOW_OPS)
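
[annotation] The monitor's health update puts a timeline on the stall: at 14:23:37 the oldest op had been blocked for 2808 seconds, so it was submitted around 13:36:49, roughly 47 minutes earlier, and the blocked op named in the OSD lines is an omap read of rbd_mirror_snapshot_schedule in PG 2.12. The op can be inspected live through the OSD's admin socket; in a containerized cephadm deployment that means going through the shell wrapper first (the `cephadm shell` invocation below is an assumption about this particular setup):

```python
import json
import subprocess

# Ask osd.2 for its in-flight ops via the admin socket; each entry
# carries an age in seconds and a description matching the osd_op(...)
# text in the log. dump_historic_slow_ops shows already-completed ones.
raw = subprocess.run(
    ["cephadm", "shell", "--", "ceph", "daemon", "osd.2",
     "dump_ops_in_flight"],
    capture_output=True, text=True, check=True,
).stdout
ops = json.loads(raw)

print("ops in flight:", ops["num_ops"])
for op in ops["ops"]:
    print(f'{op["age"]:10.1f}s  {op["description"][:80]}')
```
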
Jan 22 14:23:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:38.083+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:38.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:38.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
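
[annotation] The radosgw banners arrive in matched pairs every two seconds from 192.168.122.100 and 192.168.122.102: anonymous HEAD / requests answered 200 with near-zero latency, the signature of load-balancer or monitoring health probes rather than user traffic. Their cadence and latency can be pulled from a journal export with a small parser (the regex is tuned to the beast access-line format shown above):

```python
import re
import statistics
import sys

# Parse radosgw "beast:" access lines and report, per client IP, the
# probe count and mean latency. Feed it a journal dump on stdin.
PAT = re.compile(
    r'beast: \S+: (\S+) - (\S+) \[.*?\] "(\w+) (\S+) [^"]*" (\d+) '
    r'\d+ .*latency=([0-9.]+)s'
)

stats = {}
for line in sys.stdin:
    m = PAT.search(line)
    if m:
        ip, _user, _method, _path, _status, lat = m.groups()
        stats.setdefault(ip, []).append(float(lat))

for ip, lats in sorted(stats.items()):
    print(f"{ip}: {len(lats)} requests, "
          f"mean latency {statistics.mean(lats):.6f}s")
```
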
Jan 22 14:23:38 compute-2 ceph-mon[77081]: pgmap v1667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:38 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
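
[annotation] The _set_new_cache_sizes line is the monitor's periodic memory autotuning: it rebalances roughly 1 GiB (cache_size 1020054731 bytes, about 0.95 GiB) between the RocksDB block cache (kv_alloc) and the incremental/full osdmap caches. A sketch of reading back the knobs that plausibly drive it, assuming the default cephadm config store (which options dominate varies by release):

```python
import subprocess

# Show the settings behind the monitor cache rebalancing messages.
for opt in ("mon_memory_target", "mon_osd_cache_size",
            "rocksdb_cache_size"):
    val = subprocess.run(
        ["ceph", "config", "get", "mon", opt],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"{opt} = {val}")
```
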
Jan 22 14:23:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:39.067+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:39 compute-2 sshd-session[245189]: Invalid user ethereum from 45.148.10.240 port 43560
Jan 22 14:23:39 compute-2 sshd-session[245189]: Connection closed by invalid user ethereum 45.148.10.240 port 43560 [preauth]
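
[annotation] Interleaved with the storage noise is one internet-facing detail worth flagging: an SSH login attempt for the nonexistent user "ethereum" from 45.148.10.240, dropped pre-auth. A single probe is routine background scanning, but the offender list is easy to pull from the journal to decide whether a firewall rule or rate limiting is warranted (journalctl flags and field layout assumed from the line format above):

```python
import collections
import re
import subprocess

# Count "Invalid user" attempts per source IP from the SSH daemon's
# journal; anything with a large count is a candidate for blocking.
out = subprocess.run(
    ["journalctl", "-t", "sshd-session", "--since", "today", "-o", "cat"],
    capture_output=True, text=True, check=True,
).stdout

hits = collections.Counter(
    m.group(2)
    for m in re.finditer(r"Invalid user (\S+) from (\S+) port \d+", out)
)
for ip, n in hits.most_common(20):
    print(f"{n:6d}  {ip}")
```
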
Jan 22 14:23:39 compute-2 nova_compute[226433]: 2026-01-22 14:23:39.546 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:39 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:40.088+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:40.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:40 compute-2 ceph-mon[77081]: pgmap v1668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:40 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:40 compute-2 nova_compute[226433]: 2026-01-22 14:23:40.956 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:41.054+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:41 compute-2 sudo[245192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:41 compute-2 sudo[245192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:41 compute-2 sudo[245192]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:41 compute-2 sudo[245217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:23:41 compute-2 sudo[245217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:41 compute-2 sudo[245217]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:41 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:41 compute-2 sudo[245242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:41 compute-2 sudo[245242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:41 compute-2 sudo[245242]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:41 compute-2 sudo[245267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:23:41 compute-2 sudo[245267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:42.008+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:42.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:42.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:42 compute-2 sudo[245267]: pam_unix(sudo:session): session closed for user root
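
[annotation] The longer sudo session above wraps cephadm's gather-facts call: the orchestrator keeps a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it with a timeout to collect host inventory (CPU, memory, disks, NICs) for the mgr. The same collection can be run by hand; a sketch assuming the binary path from the log and that the output is a single JSON document (true for current cephadm, but worth verifying on this build):

```python
import json
import subprocess

# Re-run the host fact collection cephadm performs periodically.
# The path below is the content-addressed copy seen in the log.
CEPHADM = ("/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

facts = json.loads(subprocess.run(
    ["sudo", CEPHADM, "--timeout", "895", "gather-facts"],
    capture_output=True, text=True, check=True,
).stdout)

# Key names are assumptions based on cephadm's HostFacts output.
print(facts.get("hostname"), facts.get("memory_total_kb"), "kB RAM")
```
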
Jan 22 14:23:42 compute-2 ceph-mon[77081]: pgmap v1669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:42 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:42 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:42 compute-2 podman[245324]: 2026-01-22 14:23:42.989431847 +0000 UTC m=+0.053800770 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
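
[annotation] The podman event above records a passing healthcheck for the ovn_metadata_agent container (health_status=healthy, failing streak 0); the large config_data blob is the Kolla-style container definition that edpm_ansible manages, logged in full with every health event. Current health state can be queried on demand instead of waiting for the next event; `podman healthcheck run` and the inspect field below are standard podman, though the exact Health JSON shape can differ between versions:

```python
import json
import subprocess

# Trigger the container's healthcheck, then read back its state.
subprocess.run(
    ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    check=True,
)
health = json.loads(subprocess.run(
    ["podman", "inspect", "--format", "{{json .State.Health}}",
     "ovn_metadata_agent"],
    capture_output=True, text=True, check=True,
).stdout)
print(health["Status"], "failing streak:", health["FailingStreak"])
```
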
Jan 22 14:23:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:43.042+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:43 compute-2 nova_compute[226433]: 2026-01-22 14:23:43.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:44.046+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:44.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:44 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:44 compute-2 ceph-mon[77081]: pgmap v1670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:23:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
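
[annotation] This burst of audit lines is the mgr's cephadm module doing its periodic reconciliation against the monitor: regenerating the minimal client config, fetching the client.admin and client.bootstrap-osd keys it distributes to managed hosts, and checking the OSD tree for destroyed entries awaiting replacement. Each cmd=[...] is an ordinary mon command and can be replayed read-only, for example the minimal conf that cephadm pushes to /etc/ceph on hosts:

```python
import subprocess

# Reproduce the config the cephadm module just asked the mon for.
conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    capture_output=True, text=True, check=True,
).stdout
print(conf)

# The destroyed-OSD query from the same audit burst.
subprocess.run(
    ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    check=True,
)
```
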
Jan 22 14:23:44 compute-2 nova_compute[226433]: 2026-01-22 14:23:44.549 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:45.000+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:45 compute-2 nova_compute[226433]: 2026-01-22 14:23:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:45 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:45 compute-2 nova_compute[226433]: 2026-01-22 14:23:45.959 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:45.965+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:46.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:46 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:46 compute-2 ceph-mon[77081]: pgmap v1671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:46.968+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:23:47.197 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:23:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:23:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:23:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:23:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:23:47 compute-2 nova_compute[226433]: 2026-01-22 14:23:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:47 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:47 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:47.925+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:48.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:48 compute-2 nova_compute[226433]: 2026-01-22 14:23:48.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:48 compute-2 nova_compute[226433]: 2026-01-22 14:23:48.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:23:48 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:48 compute-2 ceph-mon[77081]: pgmap v1672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:48.955+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:49 compute-2 nova_compute[226433]: 2026-01-22 14:23:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:49 compute-2 nova_compute[226433]: 2026-01-22 14:23:49.551 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:49 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:50.004+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:50 compute-2 sudo[245346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:50 compute-2 sudo[245346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:50 compute-2 sudo[245346]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:50.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:50 compute-2 sudo[245371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:23:50 compute-2 sudo[245371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:50 compute-2 sudo[245371]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:50 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:50 compute-2 ceph-mon[77081]: pgmap v1673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:23:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:50.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:50 compute-2 nova_compute[226433]: 2026-01-22 14:23:50.962 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.553 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:23:51 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:51.921+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.927 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.927 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.928 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:23:51 compute-2 nova_compute[226433]: 2026-01-22 14:23:51.928 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:23:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:52.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:52 compute-2 nova_compute[226433]: 2026-01-22 14:23:52.217 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:23:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:52.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:52 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:52 compute-2 ceph-mon[77081]: pgmap v1674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:52.920+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.007 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.055 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.056 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
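
[annotation] The _heal_instance_info_cache pass above shows nova's steady-state cache repair: instances still in the Building state are skipped, one running instance (8e98e700-...) gets its network info refreshed under a per-instance lock, and here the refresh comes back as an empty list ([]), meaning Neutron reports no ports for it. The skip/refresh decisions are easy to summarize from a log dump; a sketch matching the message text above:

```python
import re
import sys

# Summarize heal_instance_info_cache activity from a nova-compute log
# on stdin: which instances were skipped as Building and which were
# actually refreshed.
SKIP = re.compile(r"\[instance: ([0-9a-f-]+)\] Skipping network cache")
HEAL = re.compile(r"\[instance: ([0-9a-f-]+)\] Updated the network info_cache")

skipped, healed = set(), set()
for line in sys.stdin:
    if m := SKIP.search(line):
        skipped.add(m.group(1))
    elif m := HEAL.search(line):
        healed.add(m.group(1))

print("skipped (Building):", sorted(skipped))
print("refreshed:", sorted(healed))
```
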
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.056 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.108 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.108 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.109 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.109 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.109 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:23:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:23:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/830754578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.547 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
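
[annotation] The resource audit begins by shelling out to `ceph df` exactly as logged (nova's RBD image backend computes pool capacity this way, authenticating as client.openstack), and the call costs about 0.44 s here, twice per audit pass. The same query, with the interesting fields pulled out; the 'vms' pool name is taken from this deployment's slow-request lines:

```python
import json
import subprocess

# Run the same capacity query nova-compute issues during its audit.
df = json.loads(subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout)

total = df["stats"]["total_bytes"] / 2**30
avail = df["stats"]["total_avail_bytes"] / 2**30
print(f"cluster: {avail:.1f} GiB free of {total:.1f} GiB")
for pool in df["pools"]:
    if pool["name"] == "vms":
        print("vms pool used:", pool["stats"]["bytes_used"], "bytes")
```
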
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.632 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.632 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:23:53 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:53 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:23:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/830754578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.801 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.802 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4508MB free_disk=20.771656036376953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.802 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.802 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:23:53 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 14:23:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.941 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.942 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.942 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.943 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.943 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.943 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:23:53 compute-2 nova_compute[226433]: 2026-01-22 14:23:53.944 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:23:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:53.966+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:54 compute-2 nova_compute[226433]: 2026-01-22 14:23:54.084 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:23:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:54.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:23:54 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2625343517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:54 compute-2 nova_compute[226433]: 2026-01-22 14:23:54.549 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:23:54 compute-2 nova_compute[226433]: 2026-01-22 14:23:54.557 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:23:54 compute-2 nova_compute[226433]: 2026-01-22 14:23:54.579 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:23:54 compute-2 nova_compute[226433]: 2026-01-22 14:23:54.580 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:23:54 compute-2 nova_compute[226433]: 2026-01-22 14:23:54.580 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
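
[annotation] The audit's numbers are internally consistent and worth decoding once: used_ram 1152 MB is the 512 MB host reservation plus five instances at 128 MB each; used_disk 5 GB is five 1 GB root disks; and the placement inventory converts physical figures into schedulable capacity via the allocation ratios, e.g. 8 VCPU x 4.0 = 32 schedulable vCPUs and (20 - 1) GB x 0.9 = 17.1 GB of disk. As arithmetic:

```python
# Reconstruct nova's "Final resource view" and placement capacity
# from the inventory in the log (reserved values, allocation ratios).
instances = 5
used_ram = 512 + instances * 128     # MB -> 1152, as logged
used_disk = instances * 1            # GB -> 5, as logged

vcpu_capacity = (8 - 0) * 4.0        # 32 schedulable vCPUs
ram_capacity = (7679 - 512) * 1.0    # 7167 MB schedulable
disk_capacity = (20 - 1) * 0.9       # 17.1 GB schedulable

print(used_ram, used_disk, vcpu_capacity, ram_capacity, disk_capacity)
```
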
Jan 22 14:23:54 compute-2 nova_compute[226433]: 2026-01-22 14:23:54.590 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:54 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:54 compute-2 ceph-mon[77081]: pgmap v1675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2625343517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:54.972+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:55 compute-2 sudo[245444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:55 compute-2 sudo[245444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:55 compute-2 sudo[245444]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:55 compute-2 sudo[245469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:23:55 compute-2 sudo[245469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:23:55 compute-2 sudo[245469]: pam_unix(sudo:session): session closed for user root
Jan 22 14:23:55 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:55.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:55 compute-2 nova_compute[226433]: 2026-01-22 14:23:55.964 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:56 compute-2 nova_compute[226433]: 2026-01-22 14:23:56.041 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:56 compute-2 nova_compute[226433]: 2026-01-22 14:23:56.041 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:23:56 compute-2 nova_compute[226433]: 2026-01-22 14:23:56.086 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:23:56.086 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:23:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:23:56.088 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:23:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:56.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:56 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:56 compute-2 ceph-mon[77081]: pgmap v1676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:56.961+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:57 compute-2 podman[245495]: 2026-01-22 14:23:57.012014413 +0000 UTC m=+0.076125457 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:23:57 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2827271453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:57.915+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:23:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:23:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:23:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:23:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:58.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:23:58 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:58 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2804744684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:23:58 compute-2 ceph-mon[77081]: pgmap v1677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:23:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:23:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:58.966+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:59 compute-2 nova_compute[226433]: 2026-01-22 14:23:59.591 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:23:59 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:23:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:59.979+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:23:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:00 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:00 compute-2 ceph-mon[77081]: pgmap v1678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:00 compute-2 nova_compute[226433]: 2026-01-22 14:24:00.967 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:01.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:01 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:01 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:01.992+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:02.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:02 compute-2 ceph-mon[77081]: pgmap v1679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:02 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:02 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:02.967+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:03 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:03.976+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:04 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:24:04.090 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:24:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:04 compute-2 nova_compute[226433]: 2026-01-22 14:24:04.646 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:04 compute-2 ceph-mon[77081]: pgmap v1680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:04 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:04.966+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:05 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:05.981+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:05 compute-2 nova_compute[226433]: 2026-01-22 14:24:05.987 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:06.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:06 compute-2 nova_compute[226433]: 2026-01-22 14:24:06.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:06 compute-2 ceph-mon[77081]: pgmap v1681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:06 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:06.973+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:07 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:07 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:07.939+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:08 compute-2 ceph-mon[77081]: pgmap v1682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:08 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:08.953+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:09 compute-2 nova_compute[226433]: 2026-01-22 14:24:09.647 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:09.949+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:10 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:10.973+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:10 compute-2 nova_compute[226433]: 2026-01-22 14:24:10.989 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:11 compute-2 ceph-mon[77081]: pgmap v1683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:11 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:11.992+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:12 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:12 compute-2 ceph-mon[77081]: pgmap v1684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:12.979+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:13 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:13 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:13 compute-2 podman[245529]: 2026-01-22 14:24:13.998113935 +0000 UTC m=+0.064252360 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 14:24:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:14.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:14.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:14.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:14 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:14 compute-2 ceph-mon[77081]: pgmap v1685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:14 compute-2 nova_compute[226433]: 2026-01-22 14:24:14.650 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:15.064+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:15 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:15 compute-2 sudo[245550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:15 compute-2 sudo[245550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:15 compute-2 sudo[245550]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:15 compute-2 sudo[245575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:15 compute-2 sudo[245575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:15 compute-2 sudo[245575]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:15 compute-2 nova_compute[226433]: 2026-01-22 14:24:15.993 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:16.058+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:24:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:24:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:16 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:16 compute-2 ceph-mon[77081]: pgmap v1686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:17.043+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.085 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.086 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "a0b3924b-4422-47c5-ba40-748e41b14d00" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.109 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.207 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.207 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.215 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.215 226437 INFO nova.compute.claims [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.444 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:17 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:24:17 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/446750844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.874 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.879 226437 DEBUG nova.compute.provider_tree [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.901 226437 DEBUG nova.scheduler.client.report [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.932 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.933 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.988 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:24:17 compute-2 nova_compute[226433]: 2026-01-22 14:24:17.988 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.020 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.055 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:24:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:18.071+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.183 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.185 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.185 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Creating image(s)
Jan 22 14:24:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.221 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.266 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.311 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.319 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.385 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.387 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.387 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.388 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.423 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.428 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 a0b3924b-4422-47c5-ba40-748e41b14d00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:18 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:18 compute-2 ceph-mon[77081]: pgmap v1687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:24:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/446750844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4024552461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:24:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4024552461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.659 226437 DEBUG nova.policy [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b8229aedbc64b9691880a91d559e987', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7efa67e548af42419a603e06c3b85f6d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.703 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 a0b3924b-4422-47c5-ba40-748e41b14d00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
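The import round-trip above (rbd_utils reporting the image absent, then the "Running cmd" / "returned: 0 in 0.275s" pair) is oslo.concurrency's processutils driving the stock rbd CLI. The same call can be reproduced directly; every argument below is copied from the logged command line:

    from oslo_concurrency import processutils

    # Same invocation as the logged command; processutils.execute is the
    # helper that emits the "Running cmd (subprocess)" and "returned"
    # DEBUG lines seen above.
    out, err = processutils.execute(
        'rbd', 'import',
        '--pool', 'vms',
        '/var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0',
        'a0b3924b-4422-47c5-ba40-748e41b14d00_disk',
        '--image-format=2',
        '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf',
    )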
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.805 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] resizing rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
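The resize target of 1073741824 bytes is the flavor's root_gb=1 expressed in bytes (1 GiB = 1024^3). A sketch of the equivalent operation through the python-rbd bindings, assuming they are installed and client.openstack has write access to the vms pool:

    import rados
    import rbd

    # Sketch of the resize step via python-rados/python-rbd (assumption:
    # bindings available; not how Nova's rbd_utils is wired internally).
    # 1 GiB == 1073741824 bytes, matching the flavor's root_gb=1.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'a0b3924b-4422-47c5-ba40-748e41b14d00_disk') as image:
                image.resize(1 * 1024 ** 3)
    finally:
        cluster.shutdown()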
Jan 22 14:24:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.921 226437 DEBUG nova.objects.instance [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lazy-loading 'migration_context' on Instance uuid a0b3924b-4422-47c5-ba40-748e41b14d00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.946 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.947 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Ensure instance console log exists: /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.948 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.948 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:18 compute-2 nova_compute[226433]: 2026-01-22 14:24:18.948 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:19.094+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:19 compute-2 nova_compute[226433]: 2026-01-22 14:24:19.651 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:19 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:20.128+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:20.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:20 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:20 compute-2 ceph-mon[77081]: pgmap v1688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 524 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 597 B/s rd, 475 KiB/s wr, 0 op/s
Jan 22 14:24:20 compute-2 nova_compute[226433]: 2026-01-22 14:24:20.759 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Successfully updated port: 982269cf-4df1-4bc7-9b49-f0de807afdd7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 22 14:24:20 compute-2 nova_compute[226433]: 2026-01-22 14:24:20.783 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:24:20 compute-2 nova_compute[226433]: 2026-01-22 14:24:20.784 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquired lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:24:20 compute-2 nova_compute[226433]: 2026-01-22 14:24:20.784 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:24:20 compute-2 nova_compute[226433]: 2026-01-22 14:24:20.996 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:21 compute-2 nova_compute[226433]: 2026-01-22 14:24:21.145 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:24:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:21.158+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:21 compute-2 nova_compute[226433]: 2026-01-22 14:24:21.346 226437 DEBUG nova.compute.manager [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Received event network-changed-982269cf-4df1-4bc7-9b49-f0de807afdd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:24:21 compute-2 nova_compute[226433]: 2026-01-22 14:24:21.347 226437 DEBUG nova.compute.manager [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Refreshing instance network info cache due to event network-changed-982269cf-4df1-4bc7-9b49-f0de807afdd7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:24:21 compute-2 nova_compute[226433]: 2026-01-22 14:24:21.347 226437 DEBUG oslo_concurrency.lockutils [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:24:21 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:22.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:22 compute-2 nova_compute[226433]: 2026-01-22 14:24:22.756 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Updating instance_info_cache with network_info: [{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:24:22 compute-2 nova_compute[226433]: 2026-01-22 14:24:22.789 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Releasing lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:24:22 compute-2 nova_compute[226433]: 2026-01-22 14:24:22.789 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Instance network_info: |[{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:24:22 compute-2 nova_compute[226433]: 2026-01-22 14:24:22.790 226437 DEBUG oslo_concurrency.lockutils [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:24:22 compute-2 nova_compute[226433]: 2026-01-22 14:24:22.790 226437 DEBUG nova.network.neutron [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Refreshing network info cache for port 982269cf-4df1-4bc7-9b49-f0de807afdd7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:24:22 compute-2 nova_compute[226433]: 2026-01-22 14:24:22.795 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start _get_guest_xml network_info=[{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.031 226437 WARNING nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:24:23 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:23 compute-2 ceph-mon[77081]: pgmap v1689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:23 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2853 sec, osd.2 has slow ops (SLOW_OPS)
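The monitor has now promoted the slow ops to a formal health check: SLOW_OPS, oldest blocked for 2853 s (roughly 47 minutes) on osd.2, matching the per-OSD get_health_metrics lines repeating throughout this capture. A hypothetical triage helper that pulls the same check out of "ceph health detail", assuming the modern JSON layout with a top-level "checks" map:

    import json
    import subprocess

    # Hypothetical SLOW_OPS triage: dump cluster health as JSON and print
    # the slow-ops summary. Only the stock ceph CLI is used; the check
    # name matches the log line above.
    health = json.loads(subprocess.check_output([
        'ceph', 'health', 'detail', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ]))
    check = health.get('checks', {}).get('SLOW_OPS')
    if check:
        print('SLOW_OPS:', check['summary']['message'])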
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.037 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.038 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.042 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.042 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.044 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.044 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.044 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.047 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
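The topology walk above reads as: flavor and image express no preference (0:0:0), so the limits fall back to 65536 each, and for a single vCPU the only factorization is sockets=1, cores=1, threads=1. A toy model of that enumeration (not Nova's code, just the same arithmetic):

    # Toy model of the search in the DEBUG lines above: enumerate every
    # (sockets, cores, threads) triple whose product equals the vCPU
    # count, within the given limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        return [
            (s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"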
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.050 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:23.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:24:23 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2112882646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.508 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.531 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.537 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:24:23 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1383228421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.990 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
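The two "ceph mon dump --format=json" calls above are how the driver learns the monitor endpoints that later appear as the host elements of the guest disk XML (192.168.122.100/.101/.102 on port 6789). A sketch of that discovery step; the exact JSON field layout varies by Ceph release, so the v1-style "addr" field below is an assumption:

    import json
    import subprocess

    # Reproduce the logged monitor discovery (assumption: each mon entry
    # carries a v1 "addr" like "192.168.122.100:6789/0").
    raw = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    mons = json.loads(raw)['mons']
    hosts = [m['addr'].rsplit('/', 1)[0] for m in mons]  # "ip:6789" pairs
    print(hosts)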
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.992 226437 DEBUG nova.virt.libvirt.vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:24:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1971220718',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1971220718',id=17,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7efa67e548af42419a603e06c3b85f6d',ramdisk_id='',reservation_id='r-ongku9tq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1914209315',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1914209315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:24:18Z,user_data=None,user_id='3b8229aedbc64b9691880a91d559e987',uuid=a0b3924b-4422-47c5-ba40-748e41b14d00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.993 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converting VIF {"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.994 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:24:23 compute-2 nova_compute[226433]: 2026-01-22 14:24:23.995 226437 DEBUG nova.objects.instance [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lazy-loading 'pci_devices' on Instance uuid a0b3924b-4422-47c5-ba40-748e41b14d00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.017 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] End _get_guest_xml xml=<domain type="kvm">
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <uuid>a0b3924b-4422-47c5-ba40-748e41b14d00</uuid>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <name>instance-00000011</name>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <memory>131072</memory>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <vcpu>1</vcpu>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <metadata>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-1971220718</nova:name>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <nova:creationTime>2026-01-22 14:24:23</nova:creationTime>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <nova:flavor name="m1.nano">
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:memory>128</nova:memory>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:disk>1</nova:disk>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:swap>0</nova:swap>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:ephemeral>0</nova:ephemeral>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:vcpus>1</nova:vcpus>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       </nova:flavor>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <nova:owner>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:user uuid="3b8229aedbc64b9691880a91d559e987">tempest-LiveAutoBlockMigrationV225Test-1914209315-project-member</nova:user>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:project uuid="7efa67e548af42419a603e06c3b85f6d">tempest-LiveAutoBlockMigrationV225Test-1914209315</nova:project>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       </nova:owner>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <nova:ports>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <nova:port uuid="982269cf-4df1-4bc7-9b49-f0de807afdd7">
Jan 22 14:24:24 compute-2 nova_compute[226433]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         </nova:port>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       </nova:ports>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </nova:instance>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   </metadata>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <sysinfo type="smbios">
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <system>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <entry name="manufacturer">RDO</entry>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <entry name="product">OpenStack Compute</entry>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <entry name="serial">a0b3924b-4422-47c5-ba40-748e41b14d00</entry>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <entry name="uuid">a0b3924b-4422-47c5-ba40-748e41b14d00</entry>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <entry name="family">Virtual Machine</entry>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </system>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   </sysinfo>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <os>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <boot dev="hd"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <smbios mode="sysinfo"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   </os>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <features>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <acpi/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <apic/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <vmcoreinfo/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   </features>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <clock offset="utc">
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <timer name="pit" tickpolicy="delay"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <timer name="hpet" present="no"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   </clock>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <cpu mode="custom" match="exact">
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <model>Nehalem</model>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <topology sockets="1" cores="1" threads="1"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   </cpu>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   <devices>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <disk type="network" device="disk">
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/a0b3924b-4422-47c5-ba40-748e41b14d00_disk">
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       </source>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <target dev="vda" bus="virtio"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <disk type="network" device="cdrom">
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config">
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       </source>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:24:24 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <target dev="sda" bus="sata"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <interface type="ethernet">
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <mac address="fa:16:3e:03:98:da"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <driver name="vhost" rx_queue_size="512"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <mtu size="1442"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <target dev="tap982269cf-4d"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </interface>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <serial type="pty">
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <log file="/var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/console.log" append="off"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </serial>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <video>
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </video>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <input type="tablet" bus="usb"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <rng model="virtio">
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <backend model="random">/dev/urandom</backend>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </rng>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <controller type="usb" index="0"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     <memballoon model="virtio">
Jan 22 14:24:24 compute-2 nova_compute[226433]:       <stats period="10"/>
Jan 22 14:24:24 compute-2 nova_compute[226433]:     </memballoon>
Jan 22 14:24:24 compute-2 nova_compute[226433]:   </devices>
Jan 22 14:24:24 compute-2 nova_compute[226433]: </domain>
Jan 22 14:24:24 compute-2 nova_compute[226433]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
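With _get_guest_xml finished, the domain definition above is handed to libvirt. For illustration only, here is how such an XML document reaches libvirt through the python bindings, assuming it was saved to a local file and qemu:///system is reachable; Nova's driver performs the equivalent internally:

    import libvirt

    # Illustration only: define and boot a domain from an XML string.
    # Nova builds the XML in memory and manages the domain lifecycle
    # itself; this sketch assumes the XML above was saved to a file.
    conn = libvirt.open('qemu:///system')
    try:
        with open('instance-00000011.xml') as f:
            dom = conn.defineXML(f.read())  # persist the definition
        dom.create()                        # boot instance-00000011
    finally:
        conn.close()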
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.018 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Preparing to wait for external event network-vif-plugged-982269cf-4df1-4bc7-9b49-f0de807afdd7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.019 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.019 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "a0b3924b-4422-47c5-ba40-748e41b14d00-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.020 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "a0b3924b-4422-47c5-ba40-748e41b14d00-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.020 226437 DEBUG nova.virt.libvirt.vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:24:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1971220718',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1971220718',id=17,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7efa67e548af42419a603e06c3b85f6d',ramdisk_id='',reservation_id='r-ongku9tq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1914209315',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1914209315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:24:18Z,user_data=None,user_id='3b8229aedbc64b9691880a91d559e987',uuid=a0b3924b-4422-47c5-ba40-748e41b14d00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.021 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converting VIF {"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.021 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.022 226437 DEBUG os_vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.023 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.023 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.024 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.027 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.027 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap982269cf-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.027 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap982269cf-4d, col_values=(('external_ids', {'iface-id': '982269cf-4df1-4bc7-9b49-f0de807afdd7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:98:da', 'vm-uuid': 'a0b3924b-4422-47c5-ba40-748e41b14d00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
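Annotation: the plug sequence above is os-vif converting the Nova VIF dict into a VIFOpenVSwitch object and then committing ovsdbapp commands against ovsdb-server: ensure br-int exists (the earlier "Transaction caused no change" line shows it already did), add the tap port, and stamp the Interface row with the external_ids that let ovn-controller bind the port. A rough CLI-level sketch of the same transaction, for illustration only — os-vif speaks the OVSDB IDL directly and does not shell out to ovs-vsctl:

```python
import subprocess

# Illustrative ovs-vsctl equivalent of the AddBridgeCommand / AddPortCommand /
# DbSetCommand transaction logged above; all values are taken from the log.
bridge = "br-int"
port = "tap982269cf-4d"
external_ids = {
    "iface-id": "982269cf-4df1-4bc7-9b49-f0de807afdd7",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:03:98:da",
    "vm-uuid": "a0b3924b-4422-47c5-ba40-748e41b14d00",
}

cmd = [
    "ovs-vsctl",
    "--", "--may-exist", "add-br", bridge,          # AddBridgeCommand(may_exist=True)
    "--", "set", "Bridge", bridge, "datapath_type=system",
    "--", "--may-exist", "add-port", bridge, port,  # AddPortCommand(may_exist=True)
    "--", "set", "Interface", port,                 # DbSetCommand(table=Interface)
    # Values are double-quoted per ovsdb syntax since they contain ':' and '-'.
    *[f'external_ids:{k}="{v}"' for k, v in external_ids.items()],
]
subprocess.run(cmd, check=True)
```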
Jan 22 14:24:24 compute-2 NetworkManager[49000]: <info>  [1769091864.0304] manager: (tap982269cf-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.029 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.033 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.036 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.036 226437 INFO os_vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d')
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.085 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.086 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.086 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] No VIF found with MAC fa:16:3e:03:98:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.086 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Using config drive
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.106 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:24:24 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:24 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2112882646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:24:24 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1383228421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:24:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:24.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
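Annotation: osd.2 re-reports the same 29 slow ops every second for the rest of this excerpt; the oldest is an omap read (omap-get-vals) against the rbd_mirror_snapshot_schedule object in PG 2.12, and the mon's SLOW_OPS health updates below show it blocked for roughly 48 minutes. A hedged sketch of inspecting the stuck ops directly on the OSD's admin socket — `ceph daemon <name> dump_ops_in_flight` is the stock admin-socket command, and this assumes its usual JSON layout and that it is run where the socket is reachable (e.g. inside the cephadm container):

```python
import json
import subprocess

# Dump in-flight ops from osd.2's admin socket and list them oldest-first.
raw = subprocess.run(
    ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
    capture_output=True, text=True, check=True,
).stdout

# 'ops', 'age' and 'description' are the usual fields in the JSON output.
for op in sorted(json.loads(raw).get("ops", []),
                 key=lambda o: o.get("age", 0.0), reverse=True):
    print(f"{op.get('age', 0.0):9.1f}s  {op.get('description', '?')}")
```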
Jan 22 14:24:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
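Annotation: the radosgw "beast" access-log lines here and throughout the excerpt are anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102, arriving on a fixed two-second cadence and answering 200 in under a millisecond — the signature of load-balancer health checks rather than user traffic. A minimal reproduction of such a probe; the host and port are assumptions, since the journal shows only the client side:

```python
import http.client

# Issue the same kind of probe the access log records: "HEAD / HTTP/1.0" -> 200.
# Endpoint is hypothetical; substitute the real radosgw frontend address/port.
conn = http.client.HTTPConnection("rgw.example.internal", 8080, timeout=2)
conn.request("HEAD", "/")
print(conn.getresponse().status)  # a healthy radosgw answers 200
conn.close()
```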
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.689 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.750 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Creating config drive at /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.755 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzsoys_h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.797 226437 DEBUG nova.network.neutron [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Updated VIF entry in instance network info cache for port 982269cf-4df1-4bc7-9b49-f0de807afdd7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.798 226437 DEBUG nova.network.neutron [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Updating instance_info_cache with network_info: [{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.818 226437 DEBUG oslo_concurrency.lockutils [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
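Annotation: the network-info-cache refresh in the two entries above runs under the per-instance lock "refresh_cache-<uuid>"; the release message comes from oslo.concurrency's lockutils. A sketch of the same locking pattern — illustrative, not nova's literal code:

```python
from oslo_concurrency import lockutils

instance_uuid = "a0b3924b-4422-47c5-ba40-748e41b14d00"

# A named lock serializes concurrent refreshes of one instance's network
# info cache; entering and leaving it produces the Acquiring/Releasing
# lock DEBUG lines seen in the journal.
with lockutils.lock(f"refresh_cache-{instance_uuid}"):
    # ... fetch network_info from neutron and store it in the cache ...
    pass
```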
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.876 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzsoys_h" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.900 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:24:24 compute-2 nova_compute[226433]: 2026-01-22 14:24:24.904 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
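Annotation: with the instance flagged for a config drive, nova first confirms no <uuid>_disk.config image exists in RBD, builds the ISO 9660 drive locally with mkisofs (exit 0 in 0.122 s), then imports it into the Ceph vms pool. The sketch below mirrors those two subprocess calls using the arguments from the log; the staging directory stands in for the temporary metadata dir (/tmp/tmpyzsoys_h) that nova populated, and the publisher string is shortened:

```python
import subprocess

instance = "a0b3924b-4422-47c5-ba40-748e41b14d00"
iso = f"/var/lib/nova/instances/{instance}/disk.config"
staging = "/path/to/staging"  # hypothetical; nova used a tempdir of metadata files

# 1. Build the ISO 9660 config drive. The volume label "config-2" is what
#    cloud-init looks for when scanning for a config drive.
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher", "OpenStack Compute", "-quiet",
     "-J", "-r", "-V", "config-2", staging],
    check=True,
)

# 2. Import the ISO into the 'vms' pool as <uuid>_disk.config so the
#    RBD-backed instance can attach it; --image-format=2 is the modern format.
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
```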
Jan 22 14:24:25 compute-2 ceph-mon[77081]: pgmap v1690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:25 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:25.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:26 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:26.190+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:27 compute-2 ceph-mon[77081]: pgmap v1691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:27 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:27.228+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:28 compute-2 podman[245914]: 2026-01-22 14:24:28.019662523 +0000 UTC m=+0.077964526 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
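Annotation: this podman event is the ovn_controller container's scheduled healthcheck completing; the configured test is the mounted /openstack/healthcheck script, and health_failing_streak=0 means it has not failed recently. The same check can be triggered on demand — `podman healthcheck run` is stock podman, and the container name is taken from the event:

```python
import subprocess

# Run the container's configured healthcheck once, outside its timer.
# Exit status 0 == healthy, matching the health_status=healthy event above.
result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
print("healthy" if result.returncode == 0 else "unhealthy")
```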
Jan 22 14:24:28 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:28 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2857 sec, osd.2 has slow ops (SLOW_OPS)
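Annotation: the SLOW_OPS ages are internally consistent — they advance in step with the wall clock (2857 s here, then 2862 s at 14:24:33, 2867 s at 14:24:38, 2872 s at 14:24:43) — so a single op has been stuck since about 13:36:51, long before the instance build above began:

```python
from datetime import datetime, timedelta

# Back out when the oldest blocked op started, from this health update.
reported = datetime(2026, 1, 22, 14, 24, 28)
print(reported - timedelta(seconds=2857))  # -> 2026-01-22 13:36:51
```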
Jan 22 14:24:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:28.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:28.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:28.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:29 compute-2 nova_compute[226433]: 2026-01-22 14:24:29.031 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:29 compute-2 ceph-mon[77081]: pgmap v1692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:29 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:29.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:29 compute-2 nova_compute[226433]: 2026-01-22 14:24:29.691 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:30 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:30.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:30.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:30.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:31 compute-2 ceph-mon[77081]: pgmap v1693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 22 14:24:31 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:31.257+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:32 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:32.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.323760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872323795, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1249, "num_deletes": 251, "total_data_size": 2152088, "memory_usage": 2194728, "flush_reason": "Manual Compaction"}
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872331901, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 922025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48245, "largest_seqno": 49488, "table_properties": {"data_size": 917757, "index_size": 1664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13046, "raw_average_key_size": 21, "raw_value_size": 907787, "raw_average_value_size": 1500, "num_data_blocks": 72, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091798, "oldest_key_time": 1769091798, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 8203 microseconds, and 3860 cpu microseconds.
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.331949) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 922025 bytes OK
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.331983) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333757) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333771) EVENT_LOG_v1 {"time_micros": 1769091872333767, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333789) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2145974, prev total WAL file size 2145974, number of live WAL files 2.
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.334553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(900KB)], [93(10MB)]
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872334605, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 12095662, "oldest_snapshot_seqno": -1}
Jan 22 14:24:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 9095 keys, 8667418 bytes, temperature: kUnknown
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872390193, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 8667418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8616931, "index_size": 26631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 244727, "raw_average_key_size": 26, "raw_value_size": 8461079, "raw_average_value_size": 930, "num_data_blocks": 1006, "num_entries": 9095, "num_filter_entries": 9095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.390951) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8667418 bytes
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.393024) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.7 rd, 154.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(22.5) write-amplify(9.4) OK, records in: 9579, records dropped: 484 output_compression: NoCompression
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.393080) EVENT_LOG_v1 {"time_micros": 1769091872393064, "job": 58, "event": "compaction_finished", "compaction_time_micros": 56083, "compaction_time_cpu_micros": 21524, "output_level": 6, "num_output_files": 1, "total_output_size": 8667418, "num_input_records": 9579, "num_output_records": 9095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872393466, "job": 58, "event": "table_file_deletion", "file_number": 95}
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872395261, "job": 58, "event": "table_file_deletion", "file_number": 93}
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.334456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:24:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
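Annotation: the rocksdb burst above is the mon's store compaction: JOB 57 flushes a ~2.1 MB memtable to the 922 KB L0 table #95, then JOB 58 merges #95 with the 10 MB L6 table #93 into the new 8.7 MB table #96 and deletes both inputs plus the old WAL. The amplification figures it reports can be reproduced from the logged byte counts:

```python
# Reproduce the JOB 58 amplification stats from the byte counts in the log.
l0_input = 922_025          # table #95 (the freshly flushed L0 file)
total_input = 12_095_662    # input_data_size: #95 plus the old L6 table #93
output = 8_667_418          # new L6 table #96

print(f"write-amplify      {output / l0_input:.1f}")                  # 9.4
print(f"read-write-amplify {(total_input + output) / l0_input:.1f}")  # 22.5
```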
Jan 22 14:24:33 compute-2 ceph-mon[77081]: pgmap v1694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Jan 22 14:24:33 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:33 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:33.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:34 compute-2 nova_compute[226433]: 2026-01-22 14:24:34.034 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:34.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:34.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:34 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:34.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:34 compute-2 nova_compute[226433]: 2026-01-22 14:24:34.694 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:35.202+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:35 compute-2 ceph-mon[77081]: pgmap v1695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:24:35 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:35 compute-2 sudo[245945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:35 compute-2 sudo[245945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:35 compute-2 sudo[245945]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:36 compute-2 sudo[245970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:36 compute-2 sudo[245970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:36 compute-2 sudo[245970]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:36.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:24:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:36.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:24:36 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:36.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:37.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:37 compute-2 ceph-mon[77081]: pgmap v1696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:24:37 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 14:24:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 14:24:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:38.226+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:38 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:38 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:38 compute-2 ceph-mon[77081]: pgmap v1697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 14:24:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:38.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:39 compute-2 nova_compute[226433]: 2026-01-22 14:24:39.037 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:39.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:39 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:39 compute-2 nova_compute[226433]: 2026-01-22 14:24:39.696 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 14:24:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 14:24:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:40.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:40 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:40 compute-2 ceph-mon[77081]: pgmap v1698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 14:24:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:41.257+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:41 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:41 compute-2 sshd-session[245998]: Invalid user ubuntu from 92.118.39.95 port 33896
Jan 22 14:24:42 compute-2 sshd-session[245998]: Connection closed by invalid user ubuntu 92.118.39.95 port 33896 [preauth]
Jan 22 14:24:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 14:24:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 14:24:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:42.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:42 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:42 compute-2 ceph-mon[77081]: pgmap v1699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:43.292+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:43 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:43 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:44 compute-2 nova_compute[226433]: 2026-01-22 14:24:44.040 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:44.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:44.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 14:24:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:44.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 14:24:44 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:44 compute-2 ceph-mon[77081]: pgmap v1700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:44 compute-2 nova_compute[226433]: 2026-01-22 14:24:44.697 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:45 compute-2 podman[246002]: 2026-01-22 14:24:45.029183061 +0000 UTC m=+0.086648586 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 14:24:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:45.273+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:45 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:45 compute-2 nova_compute[226433]: 2026-01-22 14:24:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:46.312+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 14:24:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 14:24:46 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:46 compute-2 ceph-mon[77081]: pgmap v1701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:46 compute-2 nova_compute[226433]: 2026-01-22 14:24:46.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:24:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:24:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:24:47.199 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:47.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:47 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 14:24:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 14:24:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:48.287+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:48 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:48 compute-2 ceph-mon[77081]: pgmap v1702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:49 compute-2 nova_compute[226433]: 2026-01-22 14:24:49.043 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:49.273+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:49 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:49 compute-2 nova_compute[226433]: 2026-01-22 14:24:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:49 compute-2 nova_compute[226433]: 2026-01-22 14:24:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:49 compute-2 nova_compute[226433]: 2026-01-22 14:24:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:49 compute-2 nova_compute[226433]: 2026-01-22 14:24:49.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:24:49 compute-2 nova_compute[226433]: 2026-01-22 14:24:49.700 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:50.278+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:50.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:50 compute-2 sudo[246024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:50 compute-2 sudo[246024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:50 compute-2 sudo[246024]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:50 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:50 compute-2 ceph-mon[77081]: pgmap v1703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:50 compute-2 sudo[246049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:24:50 compute-2 sudo[246049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:50 compute-2 sudo[246049]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:50 compute-2 sudo[246074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:50 compute-2 sudo[246074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:50 compute-2 sudo[246074]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:50 compute-2 sudo[246099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:24:50 compute-2 sudo[246099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:51 compute-2 sudo[246099]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:51.247+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:51 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.553 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.553 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.813 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.813 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.814 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:24:51 compute-2 nova_compute[226433]: 2026-01-22 14:24:51.814 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:24:52 compute-2 nova_compute[226433]: 2026-01-22 14:24:52.099 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:24:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:52.203+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:52.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:52.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:52 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:52 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:52 compute-2 ceph-mon[77081]: pgmap v1704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:52 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:53 compute-2 nova_compute[226433]: 2026-01-22 14:24:53.014 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:24:53 compute-2 nova_compute[226433]: 2026-01-22 14:24:53.032 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:24:53 compute-2 nova_compute[226433]: 2026-01-22 14:24:53.032 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:24:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:53.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:53 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:24:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:24:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:24:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:24:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:24:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.049 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:54.218+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:54.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:54.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.552 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.553 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.553 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.554 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.554 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:54 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:54 compute-2 ceph-mon[77081]: pgmap v1705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.703 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:24:54 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3002855907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:54 compute-2 nova_compute[226433]: 2026-01-22 14:24:54.972 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.043 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.043 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.047 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.047 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.204 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.205 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4486MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.206 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.206 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:24:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:55.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.528 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:24:55 compute-2 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 14:24:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3002855907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:24:55 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/311209811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.941 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.947 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.968 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.996 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:24:55 compute-2 nova_compute[226433]: 2026-01-22 14:24:55.996 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:24:56 compute-2 sudo[246203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:56 compute-2 sudo[246203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:56 compute-2 sudo[246203]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:56 compute-2 sudo[246228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:56 compute-2 sudo[246228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:56 compute-2 sudo[246228]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:56.186+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:56.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:56.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:56 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:56 compute-2 ceph-mon[77081]: pgmap v1706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:56 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/311209811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:56 compute-2 nova_compute[226433]: 2026-01-22 14:24:56.997 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:56 compute-2 nova_compute[226433]: 2026-01-22 14:24:56.997 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:24:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:57.213+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:57 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:57 compute-2 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:24:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:58.176+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:58 compute-2 sudo[246254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:24:58 compute-2 sudo[246254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:58 compute-2 sudo[246254]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:24:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:24:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:58.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:24:58 compute-2 sudo[246280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:24:58 compute-2 sudo[246280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:24:58 compute-2 sudo[246280]: pam_unix(sudo:session): session closed for user root
Jan 22 14:24:58 compute-2 podman[246278]: 2026-01-22 14:24:58.442299886 +0000 UTC m=+0.110748027 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:24:58 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:58 compute-2 ceph-mon[77081]: pgmap v1707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:24:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:24:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:24:59 compute-2 nova_compute[226433]: 2026-01-22 14:24:59.052 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:24:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:59.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:24:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:59 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:24:59 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1660272869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:59 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1886356108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:24:59 compute-2 nova_compute[226433]: 2026-01-22 14:24:59.705 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:00.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:00 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:00 compute-2 ceph-mon[77081]: pgmap v1708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:01.242+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:02.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:02.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:02.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:02 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:02 compute-2 ceph-mon[77081]: pgmap v1709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:03.248+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:03 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:03 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:04 compute-2 nova_compute[226433]: 2026-01-22 14:25:04.057 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:04.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 14:25:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 14:25:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:04 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:04 compute-2 ceph-mon[77081]: pgmap v1710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:04 compute-2 nova_compute[226433]: 2026-01-22 14:25:04.706 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:05.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:05 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 14:25:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 14:25:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:06.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:06 compute-2 ceph-mon[77081]: pgmap v1711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:07.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:07 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:08.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:08.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:08 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:08 compute-2 ceph-mon[77081]: pgmap v1712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:09 compute-2 nova_compute[226433]: 2026-01-22 14:25:09.060 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:09.275+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:09 compute-2 nova_compute[226433]: 2026-01-22 14:25:09.708 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:09 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:10.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 14:25:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:10.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 14:25:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:10 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:10 compute-2 ceph-mon[77081]: pgmap v1713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:11.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:12.265+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 14:25:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 14:25:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:13 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:13 compute-2 ceph-mon[77081]: pgmap v1714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:13 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:13 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:13.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:14 compute-2 nova_compute[226433]: 2026-01-22 14:25:14.063 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:14.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:14.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:14.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:14 compute-2 nova_compute[226433]: 2026-01-22 14:25:14.711 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:15 compute-2 ceph-mon[77081]: pgmap v1715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:15 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:15.363+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:15 compute-2 podman[246340]: 2026-01-22 14:25:15.985925135 +0000 UTC m=+0.051514485 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 14:25:16 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:16 compute-2 sudo[246361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:16 compute-2 sudo[246361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:16 compute-2 sudo[246361]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:16.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:16 compute-2 sudo[246386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:16 compute-2 sudo[246386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:16 compute-2 sudo[246386]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:16.335+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:17 compute-2 ceph-mon[77081]: pgmap v1716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:17 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:17.343+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:18 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:18 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:18.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:18.385+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:19 compute-2 nova_compute[226433]: 2026-01-22 14:25:19.112 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:19 compute-2 ceph-mon[77081]: pgmap v1717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4266357046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:25:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4266357046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:25:19 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:19.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:19 compute-2 nova_compute[226433]: 2026-01-22 14:25:19.713 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:20 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 14:25:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 14:25:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:20.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 14:25:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 14:25:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:21.488+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:21 compute-2 ceph-mon[77081]: pgmap v1718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 14:25:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:22.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 14:25:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:22.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:22 compute-2 ceph-mon[77081]: pgmap v1719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:22.508+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:23 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:23 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:23.557+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:24 compute-2 nova_compute[226433]: 2026-01-22 14:25:24.115 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:24.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:24 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:24 compute-2 ceph-mon[77081]: pgmap v1720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:24.575+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:24 compute-2 nova_compute[226433]: 2026-01-22 14:25:24.751 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:25 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:25.574+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:26.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:26.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:26 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:26 compute-2 ceph-mon[77081]: pgmap v1721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:26.548+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:27 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:27.580+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:28.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:28.578+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:28 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:28 compute-2 ceph-mon[77081]: pgmap v1722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:25:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 9273 writes, 50K keys, 9273 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 9273 writes, 9273 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1835 writes, 9473 keys, 1835 commit groups, 1.0 writes per commit group, ingest: 16.37 MB, 0.03 MB/s
                                           Interval WAL: 1835 writes, 1835 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.0      0.72              0.17        29    0.025       0      0       0.0       0.0
                                             L6      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6    134.2    113.1      2.18              0.69        28    0.078    199K    15K       0.0       0.0
                                            Sum      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6    100.8    103.6      2.90              0.87        57    0.051    199K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    102.5    100.0      0.75              0.20        14    0.053     64K   3548       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    134.2    113.1      2.18              0.69        28    0.078    199K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     75.4      0.72              0.17        28    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.053, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.10 MB/s write, 0.29 GB read, 0.10 MB/s read, 2.9 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 31.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000355 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1660,30.06 MB,9.88744%) FilterBlock(57,569.98 KB,0.1831%) IndexBlock(57,805.67 KB,0.258812%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:25:29 compute-2 podman[246418]: 2026-01-22 14:25:29.058127816 +0000 UTC m=+0.111665300 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:25:29 compute-2 nova_compute[226433]: 2026-01-22 14:25:29.117 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:29.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:29 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:29 compute-2 nova_compute[226433]: 2026-01-22 14:25:29.753 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:30.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:30.515+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:30 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:30 compute-2 ceph-mon[77081]: pgmap v1723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:31.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:31 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:32.489+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:32 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:32 compute-2 ceph-mon[77081]: pgmap v1724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:32 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:32 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:33.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:33 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:34 compute-2 nova_compute[226433]: 2026-01-22 14:25:34.148 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:25:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:25:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:25:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:34.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:25:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:34.475+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:34 compute-2 nova_compute[226433]: 2026-01-22 14:25:34.755 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:34 compute-2 ceph-mon[77081]: pgmap v1725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:34 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:35.446+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:35 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:36.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:36 compute-2 sudo[246446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:36 compute-2 sudo[246446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:36 compute-2 sudo[246446]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:36.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:36.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:36 compute-2 sudo[246471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:36 compute-2 sudo[246471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:36 compute-2 sudo[246471]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:36 compute-2 ceph-mon[77081]: pgmap v1726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:36 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:37.495+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:37 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:37 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:38.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:38.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:38.508+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:38 compute-2 ceph-mon[77081]: pgmap v1727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:38 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:39 compute-2 nova_compute[226433]: 2026-01-22 14:25:39.152 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:39.517+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:39 compute-2 nova_compute[226433]: 2026-01-22 14:25:39.800 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:39 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:40.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:40.538+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:40 compute-2 ceph-mon[77081]: pgmap v1728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:40 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:41.544+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:41 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:42.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:42.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:42.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:43 compute-2 ceph-mon[77081]: pgmap v1729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:43 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:43 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:43.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:44 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:44 compute-2 nova_compute[226433]: 2026-01-22 14:25:44.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:25:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:25:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:44.575+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:44 compute-2 nova_compute[226433]: 2026-01-22 14:25:44.802 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:45 compute-2 ceph-mon[77081]: pgmap v1730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:45 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:45.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:46 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:46.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:46.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:46 compute-2 nova_compute[226433]: 2026-01-22 14:25:46.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:46.562+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:47 compute-2 podman[246502]: 2026-01-22 14:25:47.00408548 +0000 UTC m=+0.067589801 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:25:47 compute-2 ceph-mon[77081]: pgmap v1731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:47 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:25:47.199 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:25:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:25:47.199 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:25:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:25:47.200 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:25:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:47.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:48 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:48 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:48.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:48.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:48 compute-2 nova_compute[226433]: 2026-01-22 14:25:48.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:48.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:49 compute-2 nova_compute[226433]: 2026-01-22 14:25:49.225 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:49 compute-2 ceph-mon[77081]: pgmap v1732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:49 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:49 compute-2 nova_compute[226433]: 2026-01-22 14:25:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:49.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:49 compute-2 nova_compute[226433]: 2026-01-22 14:25:49.804 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:50 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:25:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:50.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:25:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:50 compute-2 nova_compute[226433]: 2026-01-22 14:25:50.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:50.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:51 compute-2 nova_compute[226433]: 2026-01-22 14:25:51.156 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:51 compute-2 ceph-mon[77081]: pgmap v1733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:51 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:51.575+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:51 compute-2 nova_compute[226433]: 2026-01-22 14:25:51.590 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:51 compute-2 nova_compute[226433]: 2026-01-22 14:25:51.591 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:25:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:52 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:52 compute-2 ceph-mon[77081]: pgmap v1734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:52 compute-2 nova_compute[226433]: 2026-01-22 14:25:52.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:52 compute-2 nova_compute[226433]: 2026-01-22 14:25:52.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:25:52 compute-2 nova_compute[226433]: 2026-01-22 14:25:52.534 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:25:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:52.543+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:53 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.534 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.535 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.535 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.567 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.567 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.568 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.568 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.568 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:25:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:53.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.821 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.822 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.822 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:25:53 compute-2 nova_compute[226433]: 2026-01-22 14:25:53.822 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:25:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.076 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.228 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:54.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:54 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:54 compute-2 ceph-mon[77081]: pgmap v1735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:25:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:54.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.515 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.547 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.548 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:54.558+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.595 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.595 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.596 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.596 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.597 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:25:54 compute-2 nova_compute[226433]: 2026-01-22 14:25:54.806 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:25:55 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2696841303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.093 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.211 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.211 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.218 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.218 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.375 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.376 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4486MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.376 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.376 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:25:55 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2696841303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:55.517+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.874 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.876 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.876 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:25:55 compute-2 nova_compute[226433]: 2026-01-22 14:25:55.876 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:25:56 compute-2 nova_compute[226433]: 2026-01-22 14:25:56.260 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:25:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:56.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:56 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:56 compute-2 ceph-mon[77081]: pgmap v1736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:56.515+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:56 compute-2 sudo[246567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:56 compute-2 sudo[246567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:56 compute-2 sudo[246567]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:56 compute-2 sudo[246592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:56 compute-2 sudo[246592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:56 compute-2 sudo[246592]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:25:56 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1661484102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:56 compute-2 nova_compute[226433]: 2026-01-22 14:25:56.727 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:25:56 compute-2 nova_compute[226433]: 2026-01-22 14:25:56.733 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:25:56 compute-2 nova_compute[226433]: 2026-01-22 14:25:56.773 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:25:56 compute-2 nova_compute[226433]: 2026-01-22 14:25:56.776 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:25:56 compute-2 nova_compute[226433]: 2026-01-22 14:25:56.776 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:25:57 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:25:57 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1661484102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:57.466+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:57 compute-2 nova_compute[226433]: 2026-01-22 14:25:57.744 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:57 compute-2 nova_compute[226433]: 2026-01-22 14:25:57.745 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:25:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:25:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:25:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:25:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:25:58 compute-2 sudo[246620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:58 compute-2 sudo[246620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:58 compute-2 sudo[246620]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:58.499+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:58 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:58 compute-2 ceph-mon[77081]: pgmap v1737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:25:58 compute-2 sudo[246645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:25:58 compute-2 sudo[246645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:58 compute-2 sudo[246645]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:58 compute-2 sudo[246670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:25:58 compute-2 sudo[246670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:58 compute-2 sudo[246670]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:58 compute-2 sudo[246695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:25:58 compute-2 sudo[246695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:25:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:25:59 compute-2 sudo[246695]: pam_unix(sudo:session): session closed for user root
Jan 22 14:25:59 compute-2 nova_compute[226433]: 2026-01-22 14:25:59.233 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:59 compute-2 sshd-session[246738]: Invalid user eth from 45.148.10.240 port 57268
Jan 22 14:25:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:59.462+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:25:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:59 compute-2 nova_compute[226433]: 2026-01-22 14:25:59.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:25:59 compute-2 sshd-session[246738]: Connection closed by invalid user eth 45.148.10.240 port 57268 [preauth]
Jan 22 14:25:59 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:25:59 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3199945618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:25:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:25:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:25:59 compute-2 podman[246754]: 2026-01-22 14:25:59.619268153 +0000 UTC m=+0.151192443 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 14:25:59 compute-2 nova_compute[226433]: 2026-01-22 14:25:59.807 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.830400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959830437, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1441, "num_deletes": 251, "total_data_size": 2522195, "memory_usage": 2571048, "flush_reason": "Manual Compaction"}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959841944, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 1644723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49493, "largest_seqno": 50929, "table_properties": {"data_size": 1639072, "index_size": 2791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14729, "raw_average_key_size": 20, "raw_value_size": 1626619, "raw_average_value_size": 2303, "num_data_blocks": 120, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091873, "oldest_key_time": 1769091873, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 11592 microseconds, and 4290 cpu microseconds.
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.841988) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 1644723 bytes OK
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.842008) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.843996) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844041) EVENT_LOG_v1 {"time_micros": 1769091959844032, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844063) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2515301, prev total WAL file size 2515301, number of live WAL files 2.
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.845081) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(1606KB)], [96(8464KB)]
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959845211, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 10312141, "oldest_snapshot_seqno": -1}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 9284 keys, 8614036 bytes, temperature: kUnknown
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959897610, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 8614036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8562638, "index_size": 27094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 249886, "raw_average_key_size": 26, "raw_value_size": 8403673, "raw_average_value_size": 905, "num_data_blocks": 1020, "num_entries": 9284, "num_filter_entries": 9284, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.897830) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8614036 bytes
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.899361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 164.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.5) write-amplify(5.2) OK, records in: 9801, records dropped: 517 output_compression: NoCompression
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.899377) EVENT_LOG_v1 {"time_micros": 1769091959899369, "job": 60, "event": "compaction_finished", "compaction_time_micros": 52444, "compaction_time_cpu_micros": 25488, "output_level": 6, "num_output_files": 1, "total_output_size": 8614036, "num_input_records": 9801, "num_output_records": 9284, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959899871, "job": 60, "event": "table_file_deletion", "file_number": 98}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959901194, "job": 60, "event": "table_file_deletion", "file_number": 96}
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:25:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:26:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:00.417+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:26:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:00.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:26:00 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:26:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:26:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:26:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:26:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:26:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:26:00 compute-2 ceph-mon[77081]: pgmap v1738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:00 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/226630284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:01.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:02.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:02.346+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:02 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:02 compute-2 ceph-mon[77081]: pgmap v1739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:03.315+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:03 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:03 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:04 compute-2 nova_compute[226433]: 2026-01-22 14:26:04.284 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:04.321+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:04.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:04.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:04 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:04 compute-2 ceph-mon[77081]: pgmap v1740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:04 compute-2 nova_compute[226433]: 2026-01-22 14:26:04.809 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:05.334+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:05 compute-2 sudo[246783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:05 compute-2 sudo[246783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:05 compute-2 sudo[246783]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:05 compute-2 sudo[246808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:26:05 compute-2 sudo[246808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:05 compute-2 sudo[246808]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:05 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:26:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:26:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:06.337+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:06 compute-2 nova_compute[226433]: 2026-01-22 14:26:06.530 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:06 compute-2 ceph-mon[77081]: pgmap v1741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:07.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:07 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:08.295+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:26:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:26:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:08 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:08 compute-2 ceph-mon[77081]: pgmap v1742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:09 compute-2 nova_compute[226433]: 2026-01-22 14:26:09.286 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:09.337+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:09 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:09 compute-2 nova_compute[226433]: 2026-01-22 14:26:09.811 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:10.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:10.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:10 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:10 compute-2 ceph-mon[77081]: pgmap v1743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:11.369+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:11 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:26:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:12.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:26:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:12.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:12 compute-2 ceph-mon[77081]: pgmap v1744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:12 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:13.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:13 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:14 compute-2 nova_compute[226433]: 2026-01-22 14:26:14.289 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:14.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:14.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:14 compute-2 nova_compute[226433]: 2026-01-22 14:26:14.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:14 compute-2 nova_compute[226433]: 2026-01-22 14:26:14.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:26:14 compute-2 nova_compute[226433]: 2026-01-22 14:26:14.813 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:14 compute-2 ceph-mon[77081]: pgmap v1745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:15.338+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:15 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:16.351+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:16.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:16 compute-2 sudo[246838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:16 compute-2 sudo[246838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:16 compute-2 sudo[246838]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:16 compute-2 sudo[246864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:16 compute-2 sudo[246864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:16 compute-2 sudo[246864]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:16 compute-2 ceph-mon[77081]: pgmap v1746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:16 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:17.377+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:17 compute-2 podman[246889]: 2026-01-22 14:26:17.993715811 +0000 UTC m=+0.054314774 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 14:26:18 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:18 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:26:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:26:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:26:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:26:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:18.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:26:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:26:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:19 compute-2 ceph-mon[77081]: pgmap v1747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:26:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:26:19 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:19 compute-2 nova_compute[226433]: 2026-01-22 14:26:19.293 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:19.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:19 compute-2 nova_compute[226433]: 2026-01-22 14:26:19.815 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:20 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:20.333+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:20.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:26:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:26:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:21.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:21 compute-2 ceph-mon[77081]: pgmap v1748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:21 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:22.325+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:22.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.955 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 WARNING nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] While synchronizing instance power states, found 6 instances in the database and 2 instances on the hypervisor.
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for e0e74330-96df-479f-8baf-53fbd2ccba91 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for f591d61b-712e-49aa-85bd-8d222b607eb3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for 87e798e6-6f00-4fe1-8412-75ddc9e2878e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid 8331b067-1b3f-4a1d-a596-e966f6de776a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid a0b3924b-4422-47c5-ba40-748e41b14d00 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.989 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "8331b067-1b3f-4a1d-a596-e966f6de776a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:26:22 compute-2 nova_compute[226433]: 2026-01-22 14:26:22.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:26:23 compute-2 nova_compute[226433]: 2026-01-22 14:26:23.019 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:26:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:23.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:23 compute-2 ceph-mon[77081]: pgmap v1749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:23 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:24 compute-2 nova_compute[226433]: 2026-01-22 14:26:24.297 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:24.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:24 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:24 compute-2 ceph-mon[77081]: pgmap v1750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:24.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:24 compute-2 nova_compute[226433]: 2026-01-22 14:26:24.818 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:25.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:25 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:26.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:26 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:26 compute-2 ceph-mon[77081]: pgmap v1751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:27.301+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:27 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:27 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:28.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:29 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:29 compute-2 ceph-mon[77081]: pgmap v1752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:29.289+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:29 compute-2 nova_compute[226433]: 2026-01-22 14:26:29.301 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:29 compute-2 nova_compute[226433]: 2026-01-22 14:26:29.866 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:30 compute-2 podman[246914]: 2026-01-22 14:26:30.048001528 +0000 UTC m=+0.096348476 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:26:30 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:30 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:30.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:26:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:26:31 compute-2 ceph-mon[77081]: pgmap v1753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:31 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:26:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.5 total, 600.0 interval
                                           Cumulative writes: 7904 writes, 30K keys, 7904 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7904 writes, 1924 syncs, 4.11 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 927 writes, 2915 keys, 927 commit groups, 1.0 writes per commit group, ingest: 2.54 MB, 0.00 MB/s
                                           Interval WAL: 927 writes, 373 syncs, 2.49 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:26:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:31.245+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:32 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:32.229+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:33 compute-2 ceph-mon[77081]: pgmap v1754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:33 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:33 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:33.213+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:34 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:34.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:34 compute-2 nova_compute[226433]: 2026-01-22 14:26:34.303 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000055s ======
Jan 22 14:26:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:34.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Jan 22 14:26:34 compute-2 nova_compute[226433]: 2026-01-22 14:26:34.868 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:35.212+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:35 compute-2 ceph-mon[77081]: pgmap v1755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:35 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:36.199+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:36 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:26:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:36.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:26:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:36.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:36 compute-2 sudo[246945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:36 compute-2 sudo[246945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:36 compute-2 sudo[246945]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:36 compute-2 sudo[246970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:36 compute-2 sudo[246970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:36 compute-2 sudo[246970]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:37.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:37 compute-2 ceph-mon[77081]: pgmap v1756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:37 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:38.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:38 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:38 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:38.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:39.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:39 compute-2 ceph-mon[77081]: pgmap v1757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:39 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:39 compute-2 nova_compute[226433]: 2026-01-22 14:26:39.305 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:39 compute-2 nova_compute[226433]: 2026-01-22 14:26:39.870 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:40.307+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:40 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:40.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:41.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:41 compute-2 ceph-mon[77081]: pgmap v1758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:41 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:42.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:42 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:43 compute-2 ceph-mon[77081]: pgmap v1759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:43 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:43 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:43.329+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:44 compute-2 nova_compute[226433]: 2026-01-22 14:26:44.309 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:44 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:44 compute-2 ceph-mon[77081]: pgmap v1760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:44.372+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:44 compute-2 nova_compute[226433]: 2026-01-22 14:26:44.873 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:45.349+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:45 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:46.377+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:46 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:46 compute-2 ceph-mon[77081]: pgmap v1761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:46.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:26:47.200 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:26:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:26:47.201 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:26:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:26:47.201 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:26:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:47.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:47 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:47 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:48.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:48 compute-2 nova_compute[226433]: 2026-01-22 14:26:48.551 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:48 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:48 compute-2 ceph-mon[77081]: pgmap v1762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:48 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:48 compute-2 podman[247001]: 2026-01-22 14:26:48.98620666 +0000 UTC m=+0.050931778 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 22 14:26:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:49.494+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:49 compute-2 nova_compute[226433]: 2026-01-22 14:26:49.497 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:49 compute-2 nova_compute[226433]: 2026-01-22 14:26:49.874 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:49 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:50.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:50.453+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:50 compute-2 nova_compute[226433]: 2026-01-22 14:26:50.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:50 compute-2 nova_compute[226433]: 2026-01-22 14:26:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:50 compute-2 nova_compute[226433]: 2026-01-22 14:26:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:51 compute-2 ceph-mon[77081]: pgmap v1763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:51 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:51.419+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:52 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:52.387+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:52.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:52.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:53 compute-2 ceph-mon[77081]: pgmap v1764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:53 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:53 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:53.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:53 compute-2 nova_compute[226433]: 2026-01-22 14:26:53.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:53 compute-2 nova_compute[226433]: 2026-01-22 14:26:53.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:26:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:54 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:54 compute-2 sshd-session[247022]: Invalid user ubuntu from 92.118.39.95 port 41124
Jan 22 14:26:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:54.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:54.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:54 compute-2 nova_compute[226433]: 2026-01-22 14:26:54.500 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:54.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:54 compute-2 sshd-session[247022]: Connection closed by invalid user ubuntu 92.118.39.95 port 41124 [preauth]
Jan 22 14:26:54 compute-2 nova_compute[226433]: 2026-01-22 14:26:54.876 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:55 compute-2 ceph-mon[77081]: pgmap v1765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:55 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:55.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.537 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.537 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.538 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.538 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.538 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.958 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.959 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.959 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:26:55 compute-2 nova_compute[226433]: 2026-01-22 14:26:55.959 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.147 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:26:56 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:56 compute-2 ceph-mon[77081]: pgmap v1766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:56.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:56.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.571 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.587 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.587 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.587 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.588 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.613 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.613 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.613 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.614 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:26:56 compute-2 nova_compute[226433]: 2026-01-22 14:26:56.614 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:26:56 compute-2 sudo[247046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:56 compute-2 sudo[247046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:56 compute-2 sudo[247046]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:57 compute-2 sudo[247071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:26:57 compute-2 sudo[247071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:26:57 compute-2 sudo[247071]: pam_unix(sudo:session): session closed for user root
Jan 22 14:26:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:26:57 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/259508654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.091 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.171 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.172 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.175 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.176 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.299 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.300 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4455MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.300 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.300 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:26:57 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/259508654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:57 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.389 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.391 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.410 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.425 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.425 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.442 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.463 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:26:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:57.468+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:57 compute-2 nova_compute[226433]: 2026-01-22 14:26:57.866 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:26:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:26:58 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/815054871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:58 compute-2 nova_compute[226433]: 2026-01-22 14:26:58.282 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:26:58 compute-2 nova_compute[226433]: 2026-01-22 14:26:58.287 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:26:58 compute-2 nova_compute[226433]: 2026-01-22 14:26:58.305 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:26:58 compute-2 nova_compute[226433]: 2026-01-22 14:26:58.307 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:26:58 compute-2 nova_compute[226433]: 2026-01-22 14:26:58.307 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:26:58 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:58 compute-2 ceph-mon[77081]: pgmap v1767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:26:58 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/815054871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:26:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:26:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:58.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:26:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:58.473+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:26:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:26:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:58.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:26:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:26:59 compute-2 nova_compute[226433]: 2026-01-22 14:26:59.236 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:26:59 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:26:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:59.458+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:26:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:26:59 compute-2 nova_compute[226433]: 2026-01-22 14:26:59.548 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:26:59 compute-2 nova_compute[226433]: 2026-01-22 14:26:59.878 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:00.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:00 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 14:27:00 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1362540640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:00 compute-2 ceph-mon[77081]: pgmap v1768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 14:27:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:00.454+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:00.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:01 compute-2 podman[247122]: 2026-01-22 14:27:01.005224629 +0000 UTC m=+0.071438536 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 22 14:27:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:01.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:01 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3394215233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:02.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:02.420+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:02 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:02 compute-2 ceph-mon[77081]: pgmap v1769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 14:27:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:02.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:03.465+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:03 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:03 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 3012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:04.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:04.434+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:04 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:04 compute-2 ceph-mon[77081]: pgmap v1770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 104 op/s
Jan 22 14:27:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:04 compute-2 nova_compute[226433]: 2026-01-22 14:27:04.550 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:04 compute-2 nova_compute[226433]: 2026-01-22 14:27:04.879 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:05.479+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:05 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:05 compute-2 sudo[247150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:05 compute-2 sudo[247150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:05 compute-2 sudo[247150]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:05 compute-2 sudo[247175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:27:05 compute-2 sudo[247175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:05 compute-2 sudo[247175]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:05 compute-2 sudo[247200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:05 compute-2 sudo[247200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:05 compute-2 sudo[247200]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:05 compute-2 sudo[247225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:27:05 compute-2 sudo[247225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:06 compute-2 sudo[247225]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:06.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:06.482+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:06 compute-2 ceph-mon[77081]: pgmap v1771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 14:27:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.371522) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027371595, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1165, "num_deletes": 256, "total_data_size": 1971498, "memory_usage": 1997016, "flush_reason": "Manual Compaction"}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027382964, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 1294814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50934, "largest_seqno": 52094, "table_properties": {"data_size": 1289998, "index_size": 2212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12371, "raw_average_key_size": 20, "raw_value_size": 1279428, "raw_average_value_size": 2100, "num_data_blocks": 95, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091960, "oldest_key_time": 1769091960, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 11485 microseconds, and 4058 cpu microseconds.
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383015) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 1294814 bytes OK
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383034) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385171) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385188) EVENT_LOG_v1 {"time_micros": 1769092027385183, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385205) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1965703, prev total WAL file size 1965703, number of live WAL files 2.
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385956) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323631' seq:0, type:0; will stop at (end)
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(1264KB)], [99(8412KB)]
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027385991, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 9908850, "oldest_snapshot_seqno": -1}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 9366 keys, 9739638 bytes, temperature: kUnknown
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027444567, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 9739638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9686583, "index_size": 28559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 252970, "raw_average_key_size": 27, "raw_value_size": 9524930, "raw_average_value_size": 1016, "num_data_blocks": 1078, "num_entries": 9366, "num_filter_entries": 9366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.444826) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9739638 bytes
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.448817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 166.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 9893, records dropped: 527 output_compression: NoCompression
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.448838) EVENT_LOG_v1 {"time_micros": 1769092027448828, "job": 62, "event": "compaction_finished", "compaction_time_micros": 58657, "compaction_time_cpu_micros": 23777, "output_level": 6, "num_output_files": 1, "total_output_size": 9739638, "num_input_records": 9893, "num_output_records": 9366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027449178, "job": 62, "event": "table_file_deletion", "file_number": 101}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027450998, "job": 62, "event": "table_file_deletion", "file_number": 99}
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:27:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:07.475+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:07 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:27:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:27:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:27:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:27:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:27:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:08.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:08.447+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:08.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:08 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:08 compute-2 ceph-mon[77081]: pgmap v1772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 14:27:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:09.446+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:09 compute-2 nova_compute[226433]: 2026-01-22 14:27:09.554 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:09 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:09 compute-2 nova_compute[226433]: 2026-01-22 14:27:09.881 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:10.400+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:10.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:10 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:10 compute-2 ceph-mon[77081]: pgmap v1773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 162 op/s
Jan 22 14:27:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:11.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:11 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:12.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:27:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:27:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:12 compute-2 ceph-mon[77081]: pgmap v1774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 0 B/s wr, 155 op/s
Jan 22 14:27:12 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:13.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:13 compute-2 sudo[247285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:13 compute-2 sudo[247285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:13 compute-2 sudo[247285]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:13 compute-2 sudo[247310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:27:13 compute-2 sudo[247310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:13 compute-2 sudo[247310]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:27:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:14.312+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:14.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:14.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:14 compute-2 nova_compute[226433]: 2026-01-22 14:27:14.598 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:14 compute-2 nova_compute[226433]: 2026-01-22 14:27:14.883 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:15 compute-2 ceph-mon[77081]: pgmap v1775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 22 14:27:15 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:15.330+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:16.367+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:16.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:16 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:17 compute-2 sudo[247337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:17 compute-2 sudo[247337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:17 compute-2 sudo[247337]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:17 compute-2 sudo[247362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:17 compute-2 sudo[247362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:17 compute-2 sudo[247362]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:17.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:17 compute-2 ceph-mon[77081]: pgmap v1776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 22 14:27:17 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:17 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:18.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:18 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:18 compute-2 ceph-mon[77081]: pgmap v1777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:27:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/893010323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:27:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:19.349+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:19 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:19 compute-2 nova_compute[226433]: 2026-01-22 14:27:19.600 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:19 compute-2 nova_compute[226433]: 2026-01-22 14:27:19.885 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:20 compute-2 podman[247388]: 2026-01-22 14:27:20.012102613 +0000 UTC m=+0.061269429 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 14:27:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:20.365+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:27:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:27:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:20 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:20 compute-2 ceph-mon[77081]: pgmap v1778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:21.338+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:22.346+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:22.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:23 compute-2 ceph-mon[77081]: pgmap v1779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:23 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:23 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:23.323+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:24 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:24.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:24.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:24 compute-2 nova_compute[226433]: 2026-01-22 14:27:24.604 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:24 compute-2 nova_compute[226433]: 2026-01-22 14:27:24.887 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:25 compute-2 ceph-mon[77081]: pgmap v1780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:27:25 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:25.325+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:25 compute-2 nova_compute[226433]: 2026-01-22 14:27:25.473 226437 DEBUG oslo_concurrency.lockutils [None req-ba0e4a49-0b53-46e9-80a4-11bd4e6c0b83 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "8331b067-1b3f-4a1d-a596-e966f6de776a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:27:26 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:26.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:26.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:27 compute-2 ceph-mon[77081]: pgmap v1781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:27 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:27.393+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:28 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:28 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:28.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:28.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:29 compute-2 ceph-mon[77081]: pgmap v1782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:29 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:29.459+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:29 compute-2 nova_compute[226433]: 2026-01-22 14:27:29.606 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:29 compute-2 nova_compute[226433]: 2026-01-22 14:27:29.890 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:30 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:30 compute-2 ceph-mon[77081]: pgmap v1783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:30.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:30.496+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:31 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:31.508+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:32 compute-2 podman[247413]: 2026-01-22 14:27:32.022149455 +0000 UTC m=+0.086937168 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:27:32 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:32 compute-2 ceph-mon[77081]: pgmap v1784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:32.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:32.483+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:32.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:33.533+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:33 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:33 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:34.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:34.576+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:34 compute-2 nova_compute[226433]: 2026-01-22 14:27:34.610 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:34 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:34 compute-2 ceph-mon[77081]: pgmap v1785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:34 compute-2 nova_compute[226433]: 2026-01-22 14:27:34.893 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:35.588+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:35 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:36.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:36.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:36.598+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:36 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:36 compute-2 ceph-mon[77081]: pgmap v1786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 14:27:36 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:37 compute-2 sudo[247443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:37 compute-2 sudo[247443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:37 compute-2 sudo[247443]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:37 compute-2 sudo[247468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:37 compute-2 sudo[247468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:37 compute-2 sudo[247468]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:37.626+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:37 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:38.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:27:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:27:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:38.649+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:38 compute-2 ceph-mon[77081]: pgmap v1787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:38 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:39.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:39 compute-2 nova_compute[226433]: 2026-01-22 14:27:39.662 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:39 compute-2 nova_compute[226433]: 2026-01-22 14:27:39.894 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:40 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:40.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:40.689+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:41 compute-2 ceph-mon[77081]: pgmap v1788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:41 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:41.658+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:42 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:42.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:42.709+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:43 compute-2 ceph-mon[77081]: pgmap v1789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:43 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:43 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:43.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:44.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:44 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:44 compute-2 ceph-mon[77081]: pgmap v1790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:44 compute-2 nova_compute[226433]: 2026-01-22 14:27:44.666 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:44.758+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:44 compute-2 nova_compute[226433]: 2026-01-22 14:27:44.896 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:45.752+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:45 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:46.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:46.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:46.741+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:46 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:46 compute-2 ceph-mon[77081]: pgmap v1791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 6 op/s
Jan 22 14:27:46 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:27:47.202 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:27:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:27:47.202 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:27:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:27:47.202 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:27:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:47.703+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:48 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:48 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:48.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:48 compute-2 nova_compute[226433]: 2026-01-22 14:27:48.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:48.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:48.743+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:49 compute-2 ceph-mon[77081]: pgmap v1792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:49 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:49 compute-2 nova_compute[226433]: 2026-01-22 14:27:49.671 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:49.775+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:27:49 compute-2 nova_compute[226433]: 2026-01-22 14:27:49.939 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:50 compute-2 nova_compute[226433]: 2026-01-22 14:27:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:50.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:50.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:51 compute-2 podman[247501]: 2026-01-22 14:27:51.048543191 +0000 UTC m=+0.093332012 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 22 14:27:51 compute-2 nova_compute[226433]: 2026-01-22 14:27:51.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:51 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:27:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:51.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:52.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:52 compute-2 nova_compute[226433]: 2026-01-22 14:27:52.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:52.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:52.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:52 compute-2 ceph-mon[77081]: pgmap v1793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 14:27:52 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:52 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:52 compute-2 ceph-mon[77081]: pgmap v1794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 529 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 6.9 KiB/s rd, 341 B/s wr, 10 op/s
Jan 22 14:27:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:53.774+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:54 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:54 compute-2 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 3063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:27:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:54 compute-2 nova_compute[226433]: 2026-01-22 14:27:54.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:54 compute-2 nova_compute[226433]: 2026-01-22 14:27:54.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:27:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:54.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:54 compute-2 nova_compute[226433]: 2026-01-22 14:27:54.675 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:54.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:54 compute-2 nova_compute[226433]: 2026-01-22 14:27:54.939 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:55 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:55 compute-2 ceph-mon[77081]: pgmap v1795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 529 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 6.8 KiB/s rd, 341 B/s wr, 9 op/s
Jan 22 14:27:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2715672213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:55 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:27:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:55.800+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:27:55 compute-2 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:27:56 compute-2 nova_compute[226433]: 2026-01-22 14:27:56.248 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:27:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:56 compute-2 nova_compute[226433]: 2026-01-22 14:27:56.700 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:27:56 compute-2 nova_compute[226433]: 2026-01-22 14:27:56.714 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:27:56 compute-2 nova_compute[226433]: 2026-01-22 14:27:56.715 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:27:56 compute-2 nova_compute[226433]: 2026-01-22 14:27:56.716 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:56 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:56 compute-2 ceph-mon[77081]: pgmap v1796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:27:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:56.827+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:57 compute-2 sudo[247525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:57 compute-2 sudo[247525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:57 compute-2 sudo[247525]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:57 compute-2 sudo[247550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:27:57 compute-2 sudo[247550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:27:57 compute-2 sudo[247550]: pam_unix(sudo:session): session closed for user root
Jan 22 14:27:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:57.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:57 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:27:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:27:58 compute-2 nova_compute[226433]: 2026-01-22 14:27:58.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:27:58 compute-2 nova_compute[226433]: 2026-01-22 14:27:58.536 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:27:58 compute-2 nova_compute[226433]: 2026-01-22 14:27:58.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:27:58 compute-2 nova_compute[226433]: 2026-01-22 14:27:58.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:27:58 compute-2 nova_compute[226433]: 2026-01-22 14:27:58.537 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:27:58 compute-2 nova_compute[226433]: 2026-01-22 14:27:58.538 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:27:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:27:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:27:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:27:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:58.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:27:58 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3252584016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:27:58 compute-2 nova_compute[226433]: 2026-01-22 14:27:58.965 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.038 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.038 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.041 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.041 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.216 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.217 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4472MB free_disk=20.77179718017578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.217 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.218 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:27:59 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:59 compute-2 ceph-mon[77081]: pgmap v1797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.306 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.306 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.306 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.420 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.678 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:27:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:59.819+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:27:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:27:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:27:59 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/433983523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.841 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.847 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.863 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.889 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.889 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:27:59 compute-2 nova_compute[226433]: 2026-01-22 14:27:59.940 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:00 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:00 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3252584016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:00 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:00 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/433983523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:00.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:00.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:00.816+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:00 compute-2 nova_compute[226433]: 2026-01-22 14:28:00.891 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:01 compute-2 ceph-mon[77081]: pgmap v1798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:28:01 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/397044635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:01.808+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:02 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:02 compute-2 ceph-mon[77081]: pgmap v1799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 22 14:28:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/668164319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:02 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:02.782+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:03 compute-2 podman[247622]: 2026-01-22 14:28:03.037753482 +0000 UTC m=+0.094646028 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Jan 22 14:28:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:03.740+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:04 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:04.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:28:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:04.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:28:04 compute-2 nova_compute[226433]: 2026-01-22 14:28:04.681 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:04.732+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:04 compute-2 nova_compute[226433]: 2026-01-22 14:28:04.942 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:05 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:05 compute-2 ceph-mon[77081]: pgmap v1800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 852 B/s wr, 19 op/s
Jan 22 14:28:05 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:05.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:06.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:06 compute-2 ceph-mon[77081]: pgmap v1801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 852 B/s wr, 19 op/s
Jan 22 14:28:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:06.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:07.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:08 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:08 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:08.811+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:09 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:09 compute-2 ceph-mon[77081]: pgmap v1802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 22 14:28:09 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:09 compute-2 nova_compute[226433]: 2026-01-22 14:28:09.685 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:09.824+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:09 compute-2 nova_compute[226433]: 2026-01-22 14:28:09.944 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:10 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:10.844+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:11 compute-2 nova_compute[226433]: 2026-01-22 14:28:11.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:11.885+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:11 compute-2 ceph-mon[77081]: pgmap v1803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 683 KiB/s rd, 1 op/s
Jan 22 14:28:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:12.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:12.911+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:13 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:13 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:13 compute-2 ceph-mon[77081]: pgmap v1804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:28:13 compute-2 sudo[247654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:13 compute-2 sudo[247654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:13 compute-2 sudo[247654]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:13 compute-2 sudo[247679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:28:13 compute-2 sudo[247679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:13 compute-2 sudo[247679]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:13 compute-2 sudo[247704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:13 compute-2 sudo[247704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:13 compute-2 sudo[247704]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:13 compute-2 sudo[247729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:28:13 compute-2 sudo[247729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:13.862+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:13 compute-2 sudo[247729]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:14 compute-2 sudo[247775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:14 compute-2 sudo[247775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:14 compute-2 sudo[247775]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:14 compute-2 sudo[247800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:28:14 compute-2 sudo[247800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:14 compute-2 sudo[247800]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:14 compute-2 sudo[247825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:14 compute-2 sudo[247825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:14 compute-2 sudo[247825]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:14 compute-2 sudo[247850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:28:14 compute-2 sudo[247850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:14 compute-2 nova_compute[226433]: 2026-01-22 14:28:14.689 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:14.823+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:14 compute-2 podman[247949]: 2026-01-22 14:28:14.903425958 +0000 UTC m=+0.082725563 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 14:28:14 compute-2 nova_compute[226433]: 2026-01-22 14:28:14.948 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:15 compute-2 podman[247949]: 2026-01-22 14:28:15.003953245 +0000 UTC m=+0.183252840 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 22 14:28:15 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:15 compute-2 ceph-mon[77081]: pgmap v1805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:28:15 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:15 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:15 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:28:15 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:28:15 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:15 compute-2 podman[248103]: 2026-01-22 14:28:15.663105774 +0000 UTC m=+0.060041496 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:28:15 compute-2 podman[248103]: 2026-01-22 14:28:15.669999192 +0000 UTC m=+0.066934894 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:28:15 compute-2 sshd-session[248043]: Invalid user solv from 45.148.10.240 port 52462
Jan 22 14:28:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:15.844+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:15 compute-2 podman[248170]: 2026-01-22 14:28:15.860619522 +0000 UTC m=+0.050517736 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, distribution-scope=public)
Jan 22 14:28:15 compute-2 podman[248170]: 2026-01-22 14:28:15.874719756 +0000 UTC m=+0.064617970 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 22 14:28:15 compute-2 sudo[247850]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:15 compute-2 sshd-session[248043]: Connection closed by invalid user solv 45.148.10.240 port 52462 [preauth]
Jan 22 14:28:16 compute-2 sudo[248204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:16 compute-2 sudo[248204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:16 compute-2 sudo[248204]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:16 compute-2 sudo[248229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:28:16 compute-2 sudo[248229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:16 compute-2 sudo[248229]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:16 compute-2 sudo[248254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:16 compute-2 sudo[248254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:16 compute-2 sudo[248254]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:16 compute-2 sudo[248279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:28:16 compute-2 sudo[248279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:16.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:16 compute-2 sudo[248279]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:16.877+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:16 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:16 compute-2 ceph-mon[77081]: pgmap v1806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 546 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 39 op/s
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:28:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:28:17 compute-2 sudo[248336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:17 compute-2 sudo[248336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:17 compute-2 sudo[248336]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:17 compute-2 sudo[248361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:17 compute-2 sudo[248361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:17 compute-2 sudo[248361]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:17.830+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:18 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:18 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:18 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:18.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:18.877+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:19 compute-2 ceph-mon[77081]: pgmap v1807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 546 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 22 14:28:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:28:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3920902371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:28:19 compute-2 nova_compute[226433]: 2026-01-22 14:28:19.691 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:19.855+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:19 compute-2 nova_compute[226433]: 2026-01-22 14:28:19.996 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:20.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:20.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:20 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:20 compute-2 ceph-mon[77081]: pgmap v1808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 22 14:28:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:21.910+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:22 compute-2 podman[248388]: 2026-01-22 14:28:22.025191387 +0000 UTC m=+0.073468791 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 14:28:22 compute-2 nova_compute[226433]: 2026-01-22 14:28:22.088 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:22 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:22.088 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:28:22 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:22.089 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:28:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:22 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2632739966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:22.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:22.883+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:23 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:23 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:23 compute-2 ceph-mon[77081]: pgmap v1809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 14:28:23 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:23.885+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:24 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:24 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4267242123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:24.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:24 compute-2 nova_compute[226433]: 2026-01-22 14:28:24.695 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:24.913+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:25 compute-2 nova_compute[226433]: 2026-01-22 14:28:24.999 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:25 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:25 compute-2 ceph-mon[77081]: pgmap v1810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 14:28:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:28:25 compute-2 sudo[248409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:25 compute-2 sudo[248409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:25 compute-2 sudo[248409]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:25.920+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:25 compute-2 sudo[248434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:28:25 compute-2 sudo[248434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:25 compute-2 sudo[248434]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:26.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:26 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:26 compute-2 ceph-mon[77081]: pgmap v1811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 14:28:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:26.915+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:27 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1420612954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:27.938+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:28.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:28 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:28 compute-2 ceph-mon[77081]: pgmap v1812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 586 KiB/s wr, 2 op/s
Jan 22 14:28:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:28.915+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:29 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:29.090 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:28:29 compute-2 nova_compute[226433]: 2026-01-22 14:28:29.699 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:29 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:29.904+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:30 compute-2 nova_compute[226433]: 2026-01-22 14:28:30.002 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:30.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:30 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:30 compute-2 ceph-mon[77081]: pgmap v1813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 587 KiB/s wr, 6 op/s
Jan 22 14:28:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:30.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:31 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:31.931+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:32.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:32.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:32 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:32 compute-2 ceph-mon[77081]: pgmap v1814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 671 KiB/s rd, 13 KiB/s wr, 31 op/s
Jan 22 14:28:32 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:32.961+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:33 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:33 compute-2 nova_compute[226433]: 2026-01-22 14:28:33.993 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating tmpfile /var/lib/nova/instances/tmpbphf1dve to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 22 14:28:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:33.997+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:34 compute-2 podman[248463]: 2026-01-22 14:28:34.043369085 +0000 UTC m=+0.097314801 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 14:28:34 compute-2 nova_compute[226433]: 2026-01-22 14:28:34.109 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 22 14:28:34 compute-2 nova_compute[226433]: 2026-01-22 14:28:34.135 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:28:34 compute-2 nova_compute[226433]: 2026-01-22 14:28:34.135 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:28:34 compute-2 nova_compute[226433]: 2026-01-22 14:28:34.143 226437 INFO nova.compute.rpcapi [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Jan 22 14:28:34 compute-2 nova_compute[226433]: 2026-01-22 14:28:34.143 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:28:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:34.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:34 compute-2 nova_compute[226433]: 2026-01-22 14:28:34.701 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:34.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:35 compute-2 nova_compute[226433]: 2026-01-22 14:28:35.003 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:35 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:35 compute-2 ceph-mon[77081]: pgmap v1815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 670 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 22 14:28:35 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:35 compute-2 nova_compute[226433]: 2026-01-22 14:28:35.961 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 22 14:28:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:35.987+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:35 compute-2 nova_compute[226433]: 2026-01-22 14:28:35.995 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:28:35 compute-2 nova_compute[226433]: 2026-01-22 14:28:35.996 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:28:35 compute-2 nova_compute[226433]: 2026-01-22 14:28:35.996 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:28:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:28:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:28:36 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:36 compute-2 ceph-mon[77081]: pgmap v1816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:28:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:36.942+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.055 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.080 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.082 226437 DEBUG os_brick.utils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.083 226437 INFO oslo.privsep.daemon [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpk2q2e022/privsep.sock']
Jan 22 14:28:37 compute-2 sudo[248495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:37 compute-2 sudo[248495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:37 compute-2 sudo[248495]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:37 compute-2 sudo[248521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:37 compute-2 sudo[248521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:37 compute-2 sudo[248521]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.777 226437 INFO oslo.privsep.daemon [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Spawned new privsep daemon via rootwrap
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.642 248518 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.645 248518 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.647 248518 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.647 248518 INFO oslo.privsep.daemon [-] privsep daemon running as pid 248518
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.781 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[b51b87aa-e072-4de2-a51e-a1a2d8671e38]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:37 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:37 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.872 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.885 248518 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.885 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[ec379100-8078-4500-9648-b963dd59b562]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.887 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.894 248518 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.894 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[56bf0476-a2ff-4cac-b5b8-4ca30389adfb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:5333c49f4ca5', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.896 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.904 248518 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.905 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[53037e6e-4382-4ca6-bb8c-73ef0e919028]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.907 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[320d483d-1614-470a-a218-7b9a3db44691]: (4, '5492a354-d192-4c48-8602-99be1884b049') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.907 226437 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.927 226437 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.930 226437 DEBUG os_brick.initiator.connectors.lightos [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.931 226437 DEBUG os_brick.initiator.connectors.lightos [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.931 226437 DEBUG os_brick.initiator.connectors.lightos [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 22 14:28:37 compute-2 nova_compute[226433]: 2026-01-22 14:28:37.931 226437 DEBUG os_brick.utils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] <== get_connector_properties: return (849ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:5333c49f4ca5', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '5492a354-d192-4c48-8602-99be1884b049', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 22 14:28:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:37.979+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:28:38 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342146323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:39.002+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:39 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:39 compute-2 ceph-mon[77081]: pgmap v1817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.257 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='d5a14597-bdb5-4f11-9e87-410238b00d48'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating instance directory: /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Ensure instance console log exists: /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.259 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.260 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:28:39 compute-2 systemd[1]: Starting libvirt secret daemon...
Jan 22 14:28:39 compute-2 systemd[1]: Started libvirt secret daemon.
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.317 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.318 226437 DEBUG nova.virt.libvirt.vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:29Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.319 226437 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.320 226437 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.320 226437 DEBUG os_vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.321 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.321 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.322 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.324 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.325 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b1b16d5-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.325 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b1b16d5-1e, col_values=(('external_ids', {'iface-id': '2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:af:b6', 'vm-uuid': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.327 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:39 compute-2 NetworkManager[49000]: <info>  [1769092119.3280] manager: (tap2b1b16d5-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.330 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.336 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.337 226437 INFO os_vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.340 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 22 14:28:39 compute-2 nova_compute[226433]: 2026-01-22 14:28:39.340 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='d5a14597-bdb5-4f11-9e87-410238b00d48'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 22 14:28:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:39.963+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:40 compute-2 nova_compute[226433]: 2026-01-22 14:28:40.005 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:40 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2342146323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:28:40 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:40.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:28:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:28:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:40.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:41 compute-2 nova_compute[226433]: 2026-01-22 14:28:41.578 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b updated with migration profile {'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 22 14:28:41 compute-2 ceph-mon[77081]: pgmap v1818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 22 14:28:41 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:41 compute-2 nova_compute[226433]: 2026-01-22 14:28:41.816 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='d5a14597-bdb5-4f11-9e87-410238b00d48'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 22 14:28:41 compute-2 systemd[1]: Starting libvirt proxy daemon...
Jan 22 14:28:41 compute-2 systemd[1]: Started libvirt proxy daemon.
Jan 22 14:28:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:41.976+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:42 compute-2 kernel: tap2b1b16d5-1e: entered promiscuous mode
Jan 22 14:28:42 compute-2 NetworkManager[49000]: <info>  [1769092122.0982] manager: (tap2b1b16d5-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Jan 22 14:28:42 compute-2 ovn_controller[133156]: 2026-01-22T14:28:42Z|00045|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this additional chassis.
Jan 22 14:28:42 compute-2 ovn_controller[133156]: 2026-01-22T14:28:42Z|00046|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 14:28:42 compute-2 nova_compute[226433]: 2026-01-22 14:28:42.098 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:42 compute-2 systemd-udevd[248612]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 14:28:42 compute-2 systemd-machined[194970]: New machine qemu-4-instance-00000012.
Jan 22 14:28:42 compute-2 NetworkManager[49000]: <info>  [1769092122.1402] device (tap2b1b16d5-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 14:28:42 compute-2 NetworkManager[49000]: <info>  [1769092122.1408] device (tap2b1b16d5-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 14:28:42 compute-2 systemd[1]: Started Virtual Machine qemu-4-instance-00000012.
Jan 22 14:28:42 compute-2 nova_compute[226433]: 2026-01-22 14:28:42.163 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:42 compute-2 ovn_controller[133156]: 2026-01-22T14:28:42Z|00047|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b ovn-installed in OVS
Jan 22 14:28:42 compute-2 nova_compute[226433]: 2026-01-22 14:28:42.175 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:42.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:42 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:42 compute-2 ceph-mon[77081]: pgmap v1819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Jan 22 14:28:42 compute-2 nova_compute[226433]: 2026-01-22 14:28:42.632 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092122.631511, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:28:42 compute-2 nova_compute[226433]: 2026-01-22 14:28:42.633 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Started (Lifecycle Event)
Jan 22 14:28:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:42 compute-2 nova_compute[226433]: 2026-01-22 14:28:42.661 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:28:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:42.967+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:43 compute-2 nova_compute[226433]: 2026-01-22 14:28:43.167 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092123.166783, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:28:43 compute-2 nova_compute[226433]: 2026-01-22 14:28:43.167 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Resumed (Lifecycle Event)
Jan 22 14:28:43 compute-2 nova_compute[226433]: 2026-01-22 14:28:43.193 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:28:43 compute-2 nova_compute[226433]: 2026-01-22 14:28:43.197 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:28:43 compute-2 nova_compute[226433]: 2026-01-22 14:28:43.215 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com
Jan 22 14:28:43 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:43 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:43.947+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:44 compute-2 nova_compute[226433]: 2026-01-22 14:28:44.329 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:44.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:44 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:44 compute-2 ceph-mon[77081]: pgmap v1820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 559 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 44 op/s
Jan 22 14:28:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:44.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:45 compute-2 nova_compute[226433]: 2026-01-22 14:28:45.008 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:45 compute-2 ovn_controller[133156]: 2026-01-22T14:28:45Z|00048|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this chassis.
Jan 22 14:28:45 compute-2 ovn_controller[133156]: 2026-01-22T14:28:45Z|00049|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 14:28:45 compute-2 ovn_controller[133156]: 2026-01-22T14:28:45Z|00050|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b up in Southbound
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.577 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.579 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 bound to our chassis
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.582 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.594 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[52066b1a-6fe9-4c18-aab6-58b6914c6b87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.595 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb247a422-e1 in ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.597 237689 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb247a422-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.597 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba2a967-91b3-4074-a876-42b8c3d97eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.598 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[a81b6797-390e-415b-830e-cf2ec51a40cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.618 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[0e897fb3-d4b9-419c-880d-20c0615f6216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.648 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb8385b-ca4c-4a1e-b13e-d303a1cb377b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.683 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[32b09f63-cf0a-4542-bde8-1a4dd0492854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.690 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc08761-34c3-4c4a-b716-498dce99599b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 NetworkManager[49000]: <info>  [1769092125.6930] manager: (tapb247a422-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/32)
Jan 22 14:28:45 compute-2 systemd-udevd[248680]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.725 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7edf9b-7a9f-46a7-8af6-0169f2165c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.728 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4c5ec4-3f2a-4693-962c-faf8c46cae37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:45 compute-2 NetworkManager[49000]: <info>  [1769092125.7563] device (tapb247a422-e0): carrier: link connected
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.759 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5f9275-e823-46ed-bd48-0badccf82158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.777 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd804b9-dc7b-430e-832e-4b2fb395e0b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597646, 'reachable_time': 16968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248701, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.788 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b949df6f-b966-4e30-997d-5bd9da7ed5b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:2b35'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597646, 'tstamp': 597646}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248702, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.799 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[f21d3b10-a00f-41be-a188-bcd31b543473]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597646, 'reachable_time': 16968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248703, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.821 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6e2d79-38b3-48f2-9864-27d138e4fa30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 nova_compute[226433]: 2026-01-22 14:28:45.845 226437 INFO nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Post operation of migration started
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.884 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7d904f41-def3-429c-9d63-76fef3ab75af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.886 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.887 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.888 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb247a422-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:28:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:45.894+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:45 compute-2 nova_compute[226433]: 2026-01-22 14:28:45.931 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:45 compute-2 NetworkManager[49000]: <info>  [1769092125.9321] manager: (tapb247a422-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 22 14:28:45 compute-2 kernel: tapb247a422-e0: entered promiscuous mode
Jan 22 14:28:45 compute-2 nova_compute[226433]: 2026-01-22 14:28:45.937 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.938 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb247a422-e0, col_values=(('external_ids', {'iface-id': '9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
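The three ovsdbapp transactions above move the metadata tap off br-ex, plug it into br-int, and tag it with its logical port id so ovn-controller can match it to its logical switch port. For reference, the equivalent ovs-vsctl operations, wrapped in a small Python sketch (the subprocess wrapper is illustrative; the agent talks to ovsdb-server directly rather than shelling out):

    # ovs-vsctl equivalents of the DelPortCommand / AddPortCommand /
    # DbSetCommand transactions logged above; illustrative only.
    import subprocess

    def vsctl(*args):
        subprocess.run(['ovs-vsctl', *args], check=True)

    vsctl('--if-exists', 'del-port', 'br-ex', 'tapb247a422-e0')
    vsctl('--may-exist', 'add-port', 'br-int', 'tapb247a422-e0')
    vsctl('set', 'Interface', 'tapb247a422-e0',
          'external_ids:iface-id=9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a')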
Jan 22 14:28:45 compute-2 ovn_controller[133156]: 2026-01-22T14:28:45Z|00051|binding|INFO|Releasing lport 9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a from this chassis (sb_readonly=0)
Jan 22 14:28:45 compute-2 nova_compute[226433]: 2026-01-22 14:28:45.940 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:45 compute-2 nova_compute[226433]: 2026-01-22 14:28:45.952 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:45 compute-2 nova_compute[226433]: 2026-01-22 14:28:45.956 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.957 143497 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
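The ENOENT above is the expected path on first setup: with no pidfile there is no existing haproxy to reuse, so the agent renders a fresh config next. A tiny get_value_from_file-style helper for that pattern (hypothetical, not neutron's exact implementation):

    # Sketch: read a pidfile, treating a missing file as "not running".
    def read_pidfile(path):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except (FileNotFoundError, ValueError):
            return None

    pid = read_pidfile('/var/lib/neutron/external/pids/'
                       'b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy')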
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.958 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[77252f62-7361-48d5-8459-2b0a29b47f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.958 143497 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: global
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     log         /dev/log local0 debug
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     log-tag     haproxy-metadata-proxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     user        root
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     group       root
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     maxconn     1024
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     pidfile     /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     daemon
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: defaults
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     log global
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     mode http
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     option httplog
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     option dontlognull
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     option http-server-close
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     option forwardfor
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     retries                 3
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     timeout http-request    30s
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     timeout connect         30s
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     timeout client          32s
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     timeout server          32s
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     timeout http-keep-alive 30s
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: listen listener
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     bind 169.254.169.254:80
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     server metadata /var/lib/neutron/metadata_proxy
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:     http-request add-header X-OVN-Network-ID b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 22 14:28:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.960 143497 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'env', 'PROCESS_TAG=haproxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b247a422-e88b-4d6e-9b42-d4947ce89ea4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
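The config dumped above plus the rootwrap command on the last line are the whole pattern: render a per-network haproxy config to disk, then exec haproxy against it inside the ovnmeta namespace. A condensed sketch of that render-then-exec flow; the template is abridged from the config logged above and the command mirrors the logged invocation (illustrative, not the driver's code):

    # Sketch: render an (abridged) metadata-proxy haproxy config and
    # launch it in the network namespace, as the agent does above.
    import string
    import subprocess

    NETWORK_ID = 'b247a422-e88b-4d6e-9b42-d4947ce89ea4'
    TEMPLATE = string.Template(
        'global\n'
        '    pidfile $pidfile\n'
        '    daemon\n'
        'listen listener\n'
        '    mode http\n'
        '    bind 169.254.169.254:80\n'
        '    http-request add-header X-OVN-Network-ID $network_id\n')

    cfg = f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf'
    with open(cfg, 'w') as f:
        f.write(TEMPLATE.substitute(
            network_id=NETWORK_ID,
            pidfile=f'/var/lib/neutron/external/pids/{NETWORK_ID}.pid.haproxy'))

    subprocess.run(['ip', 'netns', 'exec', f'ovnmeta-{NETWORK_ID}',
                    'haproxy', '-f', cfg], check=True)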
Jan 22 14:28:46 compute-2 nova_compute[226433]: 2026-01-22 14:28:46.219 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:28:46 compute-2 nova_compute[226433]: 2026-01-22 14:28:46.220 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:28:46 compute-2 nova_compute[226433]: 2026-01-22 14:28:46.220 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:28:46 compute-2 podman[248738]: 2026-01-22 14:28:46.321593817 +0000 UTC m=+0.053846987 container create 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 14:28:46 compute-2 systemd[1]: Started libpod-conmon-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b.scope.
Jan 22 14:28:46 compute-2 podman[248738]: 2026-01-22 14:28:46.293886663 +0000 UTC m=+0.026139863 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 14:28:46 compute-2 systemd[1]: Started libcrun container.
Jan 22 14:28:46 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a4f7c6c9d491773a41a6ac99e5ad17b247e5c5f1025a81646d807b0889471c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 14:28:46 compute-2 podman[248738]: 2026-01-22 14:28:46.414432985 +0000 UTC m=+0.146686175 container init 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 14:28:46 compute-2 podman[248738]: 2026-01-22 14:28:46.420787019 +0000 UTC m=+0.153040189 container start 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:28:46 compute-2 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : New worker (248759) forked
Jan 22 14:28:46 compute-2 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : Loading success.
Jan 22 14:28:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:46.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:46.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
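The beast lines are radosgw's access log: client address, authenticated user (anonymous here), timestamp, request line, HTTP status, response bytes, and the request latency repeated at the end; the paired HEAD / probes from .100 and .102 every two seconds look like load-balancer health checks. A small parsing sketch; the regex is fitted to the lines above rather than taken from any documented format:

    # Sketch: field extraction for the radosgw "beast" lines above.
    import re

    line = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:14:28:46.565 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = re.search(r'(\S+) - (\S+) \[(.+?)\] "(.+?)" (\d+) (\d+).*'
                  r'latency=(\S+)', line)
    addr, user, ts, request, status, nbytes, latency = m.groups()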
Jan 22 14:28:46 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:46 compute-2 ceph-mon[77081]: pgmap v1821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 22 14:28:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:46.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:47.203 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:28:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:28:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:28:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:28:47 compute-2 nova_compute[226433]: 2026-01-22 14:28:47.744 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
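The cache update above embeds the instance's entire network_info as JSON: one VIF on network b247a422-e88b-4d6e-9b42-d4947ce89ea4 with fixed IP 10.100.0.3, MTU 1442, bound by the ovn driver. The interesting fields sit a few levels deep; a minimal sketch with the relevant subset of the logged structure inlined for self-containment:

    # Sketch: pull the fixed IP and MTU out of a network_info blob like
    # the one logged above (subset inlined).
    import json

    network_info = json.loads('''
    [{"network": {"meta": {"mtu": 1442},
                  "subnets": [{"ips": [{"address": "10.100.0.3"}]}]}}]
    ''')
    ip = network_info[0]['network']['subnets'][0]['ips'][0]['address']
    mtu = network_info[0]['network']['meta']['mtu']
    print(ip, mtu)   # 10.100.0.3 1442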
Jan 22 14:28:47 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:47.841+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:48.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:48.868+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:48 compute-2 nova_compute[226433]: 2026-01-22 14:28:48.911 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:28:48 compute-2 nova_compute[226433]: 2026-01-22 14:28:48.938 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:28:48 compute-2 nova_compute[226433]: 2026-01-22 14:28:48.938 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:28:48 compute-2 nova_compute[226433]: 2026-01-22 14:28:48.938 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:28:48 compute-2 nova_compute[226433]: 2026-01-22 14:28:48.943 226437 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 22 14:28:48 compute-2 virtqemud[225907]: Domain id=4 name='instance-00000012' uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 is tainted: custom-monitor
Jan 22 14:28:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:49 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:49 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:49 compute-2 ceph-mon[77081]: pgmap v1822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:49 compute-2 nova_compute[226433]: 2026-01-22 14:28:49.331 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:49 compute-2 nova_compute[226433]: 2026-01-22 14:28:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.645520) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129645623, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 251, "total_data_size": 3045702, "memory_usage": 3089088, "flush_reason": "Manual Compaction"}
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129661701, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 1979886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52099, "largest_seqno": 53657, "table_properties": {"data_size": 1973541, "index_size": 3356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16179, "raw_average_key_size": 21, "raw_value_size": 1959839, "raw_average_value_size": 2561, "num_data_blocks": 145, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092027, "oldest_key_time": 1769092027, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 16277 microseconds, and 7423 cpu microseconds.
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.661814) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 1979886 bytes OK
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.661854) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.666218) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.666250) EVENT_LOG_v1 {"time_micros": 1769092129666242, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.666280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3038218, prev total WAL file size 3038218, number of live WAL files 2.
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.668000) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(1933KB)], [102(9511KB)]
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129668059, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 11719524, "oldest_snapshot_seqno": -1}
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 9614 keys, 10082978 bytes, temperature: kUnknown
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129741912, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 10082978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10028140, "index_size": 29702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 259579, "raw_average_key_size": 27, "raw_value_size": 9862066, "raw_average_value_size": 1025, "num_data_blocks": 1122, "num_entries": 9614, "num_filter_entries": 9614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.742170) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10082978 bytes
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.743485) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.5 rd, 136.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 10131, records dropped: 517 output_compression: NoCompression
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.743505) EVENT_LOG_v1 {"time_micros": 1769092129743496, "job": 64, "event": "compaction_finished", "compaction_time_micros": 73917, "compaction_time_cpu_micros": 45131, "output_level": 6, "num_output_files": 1, "total_output_size": 10082978, "num_input_records": 10131, "num_output_records": 9614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
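The compaction summary above reports its own throughput and amplification figures, and they follow directly from the logged byte counts. A quick recomputation with the numbers from job 64:

    # Recompute job 64's figures: in(1.9, 9.3) MB, out 9.6 MB,
    # input 11719524 B, output 10082978 B, 73917 us of compaction time.
    in_l0, in_l6, out_l6 = 1.9, 9.3, 9.6
    print(out_l6 / in_l0)                       # ~5.1  write-amplify
    print((in_l0 + in_l6 + out_l6) / in_l0)     # ~11.0 read-write-amplify

    # bytes per microsecond is numerically MB/s (decimal):
    print(11719524 / 73917)                     # ~158.5 MB/s rd
    print(10082978 / 73917)                     # ~136.4 MB/s wr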
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129744373, "job": 64, "event": "table_file_deletion", "file_number": 104}
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129746786, "job": 64, "event": "table_file_deletion", "file_number": 102}
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.667902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:28:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:49.851+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:49 compute-2 nova_compute[226433]: 2026-01-22 14:28:49.950 226437 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 22 14:28:50 compute-2 nova_compute[226433]: 2026-01-22 14:28:50.011 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:50 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:50 compute-2 ceph-mon[77081]: pgmap v1823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:50.841+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:50 compute-2 nova_compute[226433]: 2026-01-22 14:28:50.956 226437 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 3 of 3
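The three attempts above are nova asking QEMU, via the monitor, to re-announce the guest's MAC addresses after migration so the fabric learns the instance's new location; the custom-monitor taint logged at 14:28:48 is libvirt noting that out-of-band monitor command. QMP does expose an announce-self command; a minimal sketch of issuing it by hand over a UNIX monitor socket (the socket path is hypothetical):

    # Sketch: send QMP announce-self directly; /tmp/qmp.sock is a
    # hypothetical monitor socket, not the path libvirt manages.
    import json
    import socket

    s = socket.socket(socket.AF_UNIX)
    s.connect('/tmp/qmp.sock')
    s.recv(4096)                                   # QMP greeting banner
    s.sendall(b'{"execute": "qmp_capabilities"}\n')
    s.recv(4096)
    s.sendall(json.dumps({'execute': 'announce-self'}).encode() + b'\n')
    print(s.recv(4096))                            # {"return": {}} on success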
Jan 22 14:28:50 compute-2 nova_compute[226433]: 2026-01-22 14:28:50.963 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:28:51 compute-2 nova_compute[226433]: 2026-01-22 14:28:51.006 226437 DEBUG nova.objects.instance [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 22 14:28:51 compute-2 nova_compute[226433]: 2026-01-22 14:28:51.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:51 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:51 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:51.870+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:52 compute-2 nova_compute[226433]: 2026-01-22 14:28:52.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:52.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:52.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:52 compute-2 ceph-mon[77081]: pgmap v1824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 22 14:28:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3810589282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:52 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:52.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
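The health check above pins the oldest blocked op at 3118 s as of 14:28:52, i.e. it arrived around 13:36:54 UTC, well before this migration; the repeating osd.2 lines are that same stuck omap-get-vals op on the rbd_mirror_snapshot_schedule object. The age arithmetic:

    # 3118 s before the 14:28:52 health-check update:
    from datetime import datetime, timedelta

    reported = datetime(2026, 1, 22, 14, 28, 52)
    print((reported - timedelta(seconds=3118)).time())   # 13:36:54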
Jan 22 14:28:53 compute-2 podman[248773]: 2026-01-22 14:28:53.016942046 +0000 UTC m=+0.061803094 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 14:28:53 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/820216801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:28:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:53.878+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:54 compute-2 nova_compute[226433]: 2026-01-22 14:28:54.254 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Check if temp file /var/lib/nova/instances/tmpwmqqt0dz exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 22 14:28:54 compute-2 nova_compute[226433]: 2026-01-22 14:28:54.255 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
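The two lines above are the source half of nova's shared-storage probe during live-migration pre-checks: one host drops a token file under /var/lib/nova/instances, the peer looks for it, and Exists? False yields is_shared_instance_path=False while is_shared_block_storage=True reflects the RBD-backed disks. A minimal sketch of the probe's two halves (paths mirror the log; illustrative, not nova's code):

    # Sketch: shared-storage probe. One host creates a token file in
    # the instances directory; the peer checks whether it can see it.
    import os
    import tempfile

    INSTANCES_DIR = '/var/lib/nova/instances'

    def create_test_file():                  # run on one host
        fd, path = tempfile.mkstemp(dir=INSTANCES_DIR)
        os.close(fd)
        return os.path.basename(path)        # e.g. 'tmpwmqqt0dz'

    def check_test_file(name):               # run on the peer
        return os.path.exists(os.path.join(INSTANCES_DIR, name))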
Jan 22 14:28:54 compute-2 nova_compute[226433]: 2026-01-22 14:28:54.335 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:54 compute-2 nova_compute[226433]: 2026-01-22 14:28:54.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:54.895+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:55 compute-2 nova_compute[226433]: 2026-01-22 14:28:55.013 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:55 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:55 compute-2 ceph-mon[77081]: pgmap v1825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:55.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:56 compute-2 nova_compute[226433]: 2026-01-22 14:28:56.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:56 compute-2 nova_compute[226433]: 2026-01-22 14:28:56.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:28:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:56.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:56 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:56 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:56 compute-2 ceph-mon[77081]: pgmap v1826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 22 14:28:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:28:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:28:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:56.901+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:28:57 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:57 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:28:57 compute-2 sudo[248795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:57 compute-2 sudo[248795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:57 compute-2 sudo[248795]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.839 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.841 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.841 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:28:57 compute-2 nova_compute[226433]: 2026-01-22 14:28:57.841 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:28:57 compute-2 sudo[248820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:28:57 compute-2 sudo[248820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:28:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:57.864+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:57 compute-2 sudo[248820]: pam_unix(sudo:session): session closed for user root
Jan 22 14:28:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:58 compute-2 nova_compute[226433]: 2026-01-22 14:28:58.223 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:28:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:58.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:28:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:28:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:28:58 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:58 compute-2 ceph-mon[77081]: pgmap v1827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 12 KiB/s wr, 1 op/s
Jan 22 14:28:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:58.864+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:28:59 compute-2 nova_compute[226433]: 2026-01-22 14:28:59.040 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:28:59 compute-2 nova_compute[226433]: 2026-01-22 14:28:59.337 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:28:59 compute-2 nova_compute[226433]: 2026-01-22 14:28:59.378 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:28:59 compute-2 nova_compute[226433]: 2026-01-22 14:28:59.378 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:28:59 compute-2 nova_compute[226433]: 2026-01-22 14:28:59.379 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:59 compute-2 nova_compute[226433]: 2026-01-22 14:28:59.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:28:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:59.817+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:28:59 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:28:59 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3247440006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.014 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.491 226437 DEBUG nova.compute.manager [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.491 226437 DEBUG oslo_concurrency.lockutils [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.491 226437 DEBUG oslo_concurrency.lockutils [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.492 226437 DEBUG oslo_concurrency.lockutils [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.492 226437 DEBUG nova.compute.manager [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.492 226437 DEBUG nova.compute.manager [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.556 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.556 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.557 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.557 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:29:00 compute-2 nova_compute[226433]: 2026-01-22 14:29:00.557 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:29:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:00.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:00.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:00.810+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:29:01 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2544514787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.122 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:29:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:01 compute-2 ceph-mon[77081]: pgmap v1828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 12 KiB/s wr, 1 op/s
Jan 22 14:29:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.454 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.454 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.458 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.458 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.462 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.462 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.648 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.649 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4230MB free_disk=20.771652221679688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.649 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.650 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.791 226437 INFO nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 6.35 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.792 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.812 226437 INFO nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating resource usage from migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.823 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(2fc416ea-9e83-4513-bb8e-4a3040aca5b2),old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='430e38ad-b39f-4ad2-a8ef-a7940bd63b9e'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.827 226437 DEBUG nova.objects.instance [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.828 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 22 14:29:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:01.829+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.830 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.830 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.840 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.840 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.840 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.842 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.842 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.921 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Find same serial number: pos=1, serial=6e173a8e-fd98-4de4-a470-2c50f67a6d48 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.922 226437 DEBUG nova.virt.libvirt.vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:51Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.922 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.923 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.923 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating guest XML with vif config: <interface type="ethernet">
Jan 22 14:29:01 compute-2 nova_compute[226433]:   <mac address="fa:16:3e:f9:af:b6"/>
Jan 22 14:29:01 compute-2 nova_compute[226433]:   <model type="virtio"/>
Jan 22 14:29:01 compute-2 nova_compute[226433]:   <driver name="vhost" rx_queue_size="512"/>
Jan 22 14:29:01 compute-2 nova_compute[226433]:   <mtu size="1442"/>
Jan 22 14:29:01 compute-2 nova_compute[226433]:   <target dev="tap2b1b16d5-1e"/>
Jan 22 14:29:01 compute-2 nova_compute[226433]: </interface>
Jan 22 14:29:01 compute-2 nova_compute[226433]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.924 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 22 14:29:01 compute-2 nova_compute[226433]: 2026-01-22 14:29:01.998 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:29:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2544514787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1270814979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:02 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.334 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.334 226437 INFO nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.412 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 22 14:29:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:29:02 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3776743246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.437 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.442 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.516 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.555 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.555 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:02.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:02.854+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.994 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 22 14:29:02 compute-2 nova_compute[226433]: 2026-01-22 14:29:02.995 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 22 14:29:03 compute-2 ceph-mon[77081]: pgmap v1829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Jan 22 14:29:03 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:03 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3776743246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:03 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2182985443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.353 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092143.3532639, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.354 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Paused (Lifecycle Event)
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.370 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.370 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.370 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 WARNING nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing instance network info cache due to event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG nova.network.neutron [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.374 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.377 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.403 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 22 14:29:03 compute-2 kernel: tap2b1b16d5-1e (unregistering): left promiscuous mode
Jan 22 14:29:03 compute-2 NetworkManager[49000]: <info>  [1769092143.5524] device (tap2b1b16d5-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 14:29:03 compute-2 ovn_controller[133156]: 2026-01-22T14:29:03Z|00052|binding|INFO|Releasing lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b from this chassis (sb_readonly=0)
Jan 22 14:29:03 compute-2 ovn_controller[133156]: 2026-01-22T14:29:03Z|00053|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b down in Southbound
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.581 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:03 compute-2 ovn_controller[133156]: 2026-01-22T14:29:03Z|00054|binding|INFO|Removing iface tap2b1b16d5-1e ovn-installed in OVS
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.583 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.588 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '7335e41f-b1b8-4c04-9c19-8788162d5bb4'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '18', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.589 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 unbound from our chassis
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.592 143497 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b247a422-e88b-4d6e-9b42-d4947ce89ea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.594 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a765db-556b-400d-b707-11fb8f0b7907]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.595 143497 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace which is not needed anymore
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.609 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:03 compute-2 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 22 14:29:03 compute-2 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000012.scope: Consumed 2.080s CPU time.
Jan 22 14:29:03 compute-2 systemd-machined[194970]: Machine qemu-4-instance-00000012 terminated.
Jan 22 14:29:03 compute-2 virtqemud[225907]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48: No such file or directory
Jan 22 14:29:03 compute-2 virtqemud[225907]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48: No such file or directory
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.735 226437 DEBUG nova.virt.libvirt.guest [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.736 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation has completed
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.736 226437 INFO nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] _post_live_migration() is started..
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.737 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.737 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.737 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 22 14:29:03 compute-2 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : haproxy version is 2.8.14-c23fe91
Jan 22 14:29:03 compute-2 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : path to executable is /usr/sbin/haproxy
Jan 22 14:29:03 compute-2 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [WARNING]  (248757) : Exiting Master process...
Jan 22 14:29:03 compute-2 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [ALERT]    (248757) : Current worker (248759) exited with code 143 (Terminated)
Jan 22 14:29:03 compute-2 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [WARNING]  (248757) : All workers exited. Exiting... (0)
Jan 22 14:29:03 compute-2 systemd[1]: libpod-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b.scope: Deactivated successfully.
Jan 22 14:29:03 compute-2 podman[248924]: 2026-01-22 14:29:03.769958997 +0000 UTC m=+0.060919766 container died 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:29:03 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b-userdata-shm.mount: Deactivated successfully.
Jan 22 14:29:03 compute-2 systemd[1]: var-lib-containers-storage-overlay-30a4f7c6c9d491773a41a6ac99e5ad17b247e5c5f1025a81646d807b0889471c-merged.mount: Deactivated successfully.
Jan 22 14:29:03 compute-2 podman[248924]: 2026-01-22 14:29:03.816821692 +0000 UTC m=+0.107782411 container cleanup 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:29:03 compute-2 systemd[1]: libpod-conmon-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b.scope: Deactivated successfully.
Jan 22 14:29:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:03.872+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:03 compute-2 podman[248971]: 2026-01-22 14:29:03.875268114 +0000 UTC m=+0.037427223 container remove 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.881 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[bec22f60-683d-492d-bd25-b6f39ac9c8a2]: (4, ('Thu Jan 22 02:29:03 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b)\n3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b\nThu Jan 22 02:29:03 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b)\n3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.882 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0892fd70-b84e-414b-b826-8f951fb39883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.883 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.884 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:03 compute-2 kernel: tapb247a422-e0: left promiscuous mode
Jan 22 14:29:03 compute-2 nova_compute[226433]: 2026-01-22 14:29:03.904 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.907 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[25afedba-6a56-4ded-9041-ff040eda79c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.918 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[eeaf4b66-ea25-4f91-adc4-016bfcb97a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.919 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[439ede5f-3505-441f-8007-6c427d52773b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.931 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[dc570c2f-cc50-4a0b-8d51-03c69e3aa01b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597638, 'reachable_time': 15837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248991, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 systemd[1]: run-netns-ovnmeta\x2db247a422\x2de88b\x2d4d6e\x2d9b42\x2dd4947ce89ea4.mount: Deactivated successfully.
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.934 143856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 22 14:29:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.934 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[3b81f5ab-0c77-438e-aca6-144f88aadd41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:29:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:04 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.338 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:04 compute-2 sshd-session[248952]: Invalid user ubuntu from 92.118.39.95 port 48332
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.416 226437 DEBUG nova.compute.manager [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG oslo_concurrency.lockutils [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG oslo_concurrency.lockutils [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG oslo_concurrency.lockutils [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG nova.compute.manager [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.418 226437 DEBUG nova.compute.manager [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 22 14:29:04 compute-2 podman[248993]: 2026-01-22 14:29:04.480436716 +0000 UTC m=+0.089303126 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:29:04 compute-2 sshd-session[248952]: Connection closed by invalid user ubuntu 92.118.39.95 port 48332 [preauth]
Jan 22 14:29:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:04.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:04.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.901 226437 DEBUG nova.network.neutron [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Activated binding for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.902 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.903 226437 DEBUG nova.virt.libvirt.vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:53Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.903 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.905 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.905 226437 DEBUG os_vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.908 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.908 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b1b16d5-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.911 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.914 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.920 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.924 226437 INFO os_vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.924 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.925 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.925 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.925 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.926 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deleting instance files /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del
Jan 22 14:29:04 compute-2 nova_compute[226433]: 2026-01-22 14:29:04.927 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deletion of /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del complete
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.005 226437 DEBUG nova.compute.manager [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.006 226437 DEBUG oslo_concurrency.lockutils [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.006 226437 DEBUG oslo_concurrency.lockutils [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.007 226437 DEBUG oslo_concurrency.lockutils [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.007 226437 DEBUG nova.compute.manager [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.007 226437 DEBUG nova.compute.manager [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.017 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.301 226437 DEBUG nova.network.neutron [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updated VIF entry in instance network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.301 226437 DEBUG nova.network.neutron [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:29:05 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:05 compute-2 ceph-mon[77081]: pgmap v1830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:29:05 compute-2 nova_compute[226433]: 2026-01-22 14:29:05.480 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:29:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:05.809+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.550 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.556 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:29:06 compute-2 nova_compute[226433]: 2026-01-22 14:29:06.556 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.
Jan 22 14:29:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:06.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:06.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:06.816+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:07 compute-2 ceph-mon[77081]: pgmap v1831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:07 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:07.825+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:08 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:08 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:08.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:08.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:08.852+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:09 compute-2 ceph-mon[77081]: pgmap v1832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:09.840+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:09 compute-2 nova_compute[226433]: 2026-01-22 14:29:09.912 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:10 compute-2 nova_compute[226433]: 2026-01-22 14:29:10.018 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:10 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:10 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:10 compute-2 ceph-mon[77081]: pgmap v1833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:10.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:10.805+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:11 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:11.779+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.832 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.832 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.832 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.892 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.892 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.893 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.893 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:29:11 compute-2 nova_compute[226433]: 2026-01-22 14:29:11.893 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:29:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:29:12 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3324759149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.355 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.464 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.464 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.468 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.468 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:29:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:12 compute-2 ceph-mon[77081]: pgmap v1834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:12 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3324759149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.643 226437 WARNING nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.645 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4388MB free_disk=20.771652221679688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.645 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.646 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.748 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Migration for instance 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 22 14:29:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:12.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.800 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.832 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:29:12 compute-2 nova_compute[226433]: 2026-01-22 14:29:12.971 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:29:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:29:13 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/731787681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.452 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.459 226437 DEBUG nova.compute.provider_tree [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.484 226437 DEBUG nova.scheduler.client.report [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.521 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.521 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.526 226437 INFO nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migrating instance to compute-0.ctlplane.example.com finished successfully.
Jan 22 14:29:13 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:13 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:13 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/731787681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.675 226437 INFO nova.scheduler.client.report [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Deleted allocation for migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2
Jan 22 14:29:13 compute-2 nova_compute[226433]: 2026-01-22 14:29:13.675 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 22 14:29:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:13.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:14.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:14 compute-2 ceph-mon[77081]: pgmap v1835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 592 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 22 14:29:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:14.766+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:14 compute-2 nova_compute[226433]: 2026-01-22 14:29:14.914 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:15 compute-2 nova_compute[226433]: 2026-01-22 14:29:15.020 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:15 compute-2 nova_compute[226433]: 2026-01-22 14:29:15.028 226437 DEBUG oslo_concurrency.lockutils [None req-948507e0-498f-43bb-aede-57b100eccc71 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:29:15 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:29:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:29:15 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:29:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:15.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:15 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:15 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:29:15 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:29:16 compute-2 nova_compute[226433]: 2026-01-22 14:29:16.338 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:16.338 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:29:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:16.339 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:29:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:16.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:16.749+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:16 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:16 compute-2 ceph-mon[77081]: pgmap v1836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 541 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Jan 22 14:29:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:17.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:17 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:17 compute-2 sudo[249074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:17 compute-2 sudo[249074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:17 compute-2 sudo[249074]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:18 compute-2 sudo[249099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:18 compute-2 sudo[249099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:18 compute-2 sudo[249099]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:18.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:18 compute-2 nova_compute[226433]: 2026-01-22 14:29:18.735 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092143.733853, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:29:18 compute-2 nova_compute[226433]: 2026-01-22 14:29:18.736 226437 INFO nova.compute.manager [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Stopped (Lifecycle Event)
Jan 22 14:29:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:18.745+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:18 compute-2 nova_compute[226433]: 2026-01-22 14:29:18.797 226437 DEBUG nova.compute.manager [None req-5fce806e-e6a3-4ddf-9ddb-50be8da55f5d - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:29:18 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:18 compute-2 ceph-mon[77081]: pgmap v1837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 541 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 938 B/s wr, 27 op/s
Jan 22 14:29:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:29:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2512728204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:29:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:19.775+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:19 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:19 compute-2 nova_compute[226433]: 2026-01-22 14:29:19.916 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:20 compute-2 nova_compute[226433]: 2026-01-22 14:29:20.022 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:20.794+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:20 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:20 compute-2 ceph-mon[77081]: pgmap v1838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:21 compute-2 nova_compute[226433]: 2026-01-22 14:29:21.560 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:21.841+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:21 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:22.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:22.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:22 compute-2 ceph-mon[77081]: pgmap v1839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:22 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:23 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:23.341 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:29:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:23.818+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:23 compute-2 podman[249127]: 2026-01-22 14:29:23.99153615 +0000 UTC m=+0.055434876 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:29:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:24 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:24.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:24.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:24.803+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:24 compute-2 nova_compute[226433]: 2026-01-22 14:29:24.918 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:25 compute-2 nova_compute[226433]: 2026-01-22 14:29:25.024 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:25 compute-2 ceph-mon[77081]: pgmap v1840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:25 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:25.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:26 compute-2 sudo[249147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:26 compute-2 sudo[249147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-2 sudo[249147]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:26 compute-2 sudo[249172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:29:26 compute-2 sudo[249172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-2 sudo[249172]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:26 compute-2 sudo[249197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:26 compute-2 sudo[249197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-2 sudo[249197]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:26 compute-2 sudo[249222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:29:26 compute-2 sudo[249222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:26 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:26 compute-2 ceph-mon[77081]: pgmap v1841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 22 14:29:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:26.764+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:26 compute-2 sudo[249222]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:27 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:29:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:29:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:29:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:29:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:29:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:29:27 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:27.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:28 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:28 compute-2 ceph-mon[77081]: pgmap v1842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 2 op/s
Jan 22 14:29:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:28.756+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:29 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:29.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:29 compute-2 nova_compute[226433]: 2026-01-22 14:29:29.921 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:30 compute-2 nova_compute[226433]: 2026-01-22 14:29:30.026 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:30.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:30 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:30 compute-2 ceph-mon[77081]: pgmap v1843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 2 op/s
Jan 22 14:29:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:30.833+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:31.788+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:31 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:29:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:29:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:32.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:32 compute-2 sudo[249283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:32 compute-2 sudo[249283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:32 compute-2 sudo[249283]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:33 compute-2 sudo[249308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:29:33 compute-2 sudo[249308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:33 compute-2 sudo[249308]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:33 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:33 compute-2 ceph-mon[77081]: pgmap v1844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:29:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:29:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:33.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:34 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:34 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:34 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:34.761+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:34 compute-2 nova_compute[226433]: 2026-01-22 14:29:34.924 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:35 compute-2 nova_compute[226433]: 2026-01-22 14:29:35.027 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:35 compute-2 podman[249334]: 2026-01-22 14:29:35.02835462 +0000 UTC m=+0.091069411 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:29:35 compute-2 ceph-mon[77081]: pgmap v1845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:35 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:35.761+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:36 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:29:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:29:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:36.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:37 compute-2 ceph-mon[77081]: pgmap v1846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:37 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:37.755+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:38 compute-2 sudo[249362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:38 compute-2 sudo[249362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:38 compute-2 sudo[249362]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:38 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:38 compute-2 sudo[249387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:38 compute-2 sudo[249387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:38 compute-2 sudo[249387]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:38.791+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:39 compute-2 ceph-mon[77081]: pgmap v1847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:39 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:39.835+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:39 compute-2 nova_compute[226433]: 2026-01-22 14:29:39.926 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:40 compute-2 nova_compute[226433]: 2026-01-22 14:29:40.029 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:40 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:29:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:40.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:29:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:40.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:40.806+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:41 compute-2 ceph-mon[77081]: pgmap v1848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:41 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:41.824+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:42.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:42 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:42 compute-2 ceph-mon[77081]: pgmap v1849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:42.831+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:43 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:43 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:43.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:44 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:44 compute-2 ceph-mon[77081]: pgmap v1850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:44.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:44.772+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:44 compute-2 nova_compute[226433]: 2026-01-22 14:29:44.927 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:45 compute-2 nova_compute[226433]: 2026-01-22 14:29:45.031 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:45 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:45.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:46 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:46 compute-2 ceph-mon[77081]: pgmap v1851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:46.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:46.828+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:29:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:29:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:29:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:29:47 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:47 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:47.861+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:29:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:29:48 compute-2 ceph-mon[77081]: pgmap v1852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:48.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:48.820+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:49 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:49 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:49.783+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:49 compute-2 nova_compute[226433]: 2026-01-22 14:29:49.929 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:50 compute-2 nova_compute[226433]: 2026-01-22 14:29:50.034 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:50 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:50 compute-2 ceph-mon[77081]: pgmap v1853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:50.803+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:51 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:51.830+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:52 compute-2 nova_compute[226433]: 2026-01-22 14:29:52.556 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:52 compute-2 nova_compute[226433]: 2026-01-22 14:29:52.557 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:52.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:52.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:52 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:52 compute-2 ceph-mon[77081]: pgmap v1854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:52.817+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:53 compute-2 nova_compute[226433]: 2026-01-22 14:29:53.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:53 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:29:53 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:53.831+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:54.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:54 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:54 compute-2 ceph-mon[77081]: pgmap v1855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:54.851+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:54 compute-2 nova_compute[226433]: 2026-01-22 14:29:54.932 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:55 compute-2 podman[249423]: 2026-01-22 14:29:55.014815245 +0000 UTC m=+0.068663266 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:29:55 compute-2 nova_compute[226433]: 2026-01-22 14:29:55.037 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:29:55 compute-2 nova_compute[226433]: 2026-01-22 14:29:55.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:55 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:55.874+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:56.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:56.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:56 compute-2 ceph-mon[77081]: pgmap v1856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:56.849+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:57 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:57 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:57.855+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:58 compute-2 sudo[249445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:58 compute-2 sudo[249445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:58 compute-2 sudo[249445]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:58 compute-2 sudo[249470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:29:58 compute-2 sudo[249470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:29:58 compute-2 sudo[249470]: pam_unix(sudo:session): session closed for user root
Jan 22 14:29:58 compute-2 nova_compute[226433]: 2026-01-22 14:29:58.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:58 compute-2 nova_compute[226433]: 2026-01-22 14:29:58.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:29:58 compute-2 ovn_controller[133156]: 2026-01-22T14:29:58Z|00055|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 22 14:29:58 compute-2 nova_compute[226433]: 2026-01-22 14:29:58.585 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 14:29:58 compute-2 nova_compute[226433]: 2026-01-22 14:29:58.587 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:58 compute-2 nova_compute[226433]: 2026-01-22 14:29:58.587 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:58 compute-2 nova_compute[226433]: 2026-01-22 14:29:58.588 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:29:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:29:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:29:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:29:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:29:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:29:58 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:58 compute-2 ceph-mon[77081]: pgmap v1857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:29:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:58.876+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:29:59 compute-2 nova_compute[226433]: 2026-01-22 14:29:59.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:29:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:59.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:29:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:59 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:59 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:29:59 compute-2 nova_compute[226433]: 2026-01-22 14:29:59.935 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:00 compute-2 nova_compute[226433]: 2026-01-22 14:30:00.039 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:00 compute-2 nova_compute[226433]: 2026-01-22 14:30:00.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:00 compute-2 nova_compute[226433]: 2026-01-22 14:30:00.548 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:30:00 compute-2 nova_compute[226433]: 2026-01-22 14:30:00.549 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:30:00 compute-2 nova_compute[226433]: 2026-01-22 14:30:00.549 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:30:00 compute-2 nova_compute[226433]: 2026-01-22 14:30:00.549 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:30:00 compute-2 nova_compute[226433]: 2026-01-22 14:30:00.550 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:30:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:00.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:00.870+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 14:30:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 14:30:00 compute-2 ceph-mon[77081]: pgmap v1858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:30:01 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1158373973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.017 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.264 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.265 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.268 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.269 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.434 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.435 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4389MB free_disk=20.77179718017578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.435 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.435 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.600 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.600 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.602 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:30:01 compute-2 nova_compute[226433]: 2026-01-22 14:30:01.771 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:30:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:01.856+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:01 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1158373973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:01 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:30:02 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1940122581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:02 compute-2 nova_compute[226433]: 2026-01-22 14:30:02.249 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:30:02 compute-2 nova_compute[226433]: 2026-01-22 14:30:02.255 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:30:02 compute-2 nova_compute[226433]: 2026-01-22 14:30:02.331 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:30:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:02.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:02 compute-2 nova_compute[226433]: 2026-01-22 14:30:02.701 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:30:02 compute-2 nova_compute[226433]: 2026-01-22 14:30:02.701 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:30:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:02.835+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:02 compute-2 ceph-mon[77081]: pgmap v1859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1940122581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:02 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:02 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:03.816+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:03 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/901878159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:03 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:04.779+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:04 compute-2 nova_compute[226433]: 2026-01-22 14:30:04.938 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:04 compute-2 ceph-mon[77081]: pgmap v1860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:04 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3235649745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:04 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:05 compute-2 nova_compute[226433]: 2026-01-22 14:30:05.040 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:05.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:06 compute-2 podman[249543]: 2026-01-22 14:30:06.041053372 +0000 UTC m=+0.105307337 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 22 14:30:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:06.785+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:07 compute-2 ceph-mon[77081]: pgmap v1861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:07 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:07 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:30:07.244 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:30:07 compute-2 nova_compute[226433]: 2026-01-22 14:30:07.244 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:07 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:30:07.245 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:30:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:07.817+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:08 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:08 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:08.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:08.820+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:09 compute-2 ceph-mon[77081]: pgmap v1862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:09 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:09.870+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:09 compute-2 nova_compute[226433]: 2026-01-22 14:30:09.941 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:10 compute-2 nova_compute[226433]: 2026-01-22 14:30:10.043 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:10.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:10.866+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:11 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:11 compute-2 ceph-mon[77081]: pgmap v1863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:11.852+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:12 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:30:12.246 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:30:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:12.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:12.892+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:13 compute-2 ceph-mon[77081]: pgmap v1864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:13 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:13.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:30:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:14 compute-2 nova_compute[226433]: 2026-01-22 14:30:14.697 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:14.934+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:14 compute-2 nova_compute[226433]: 2026-01-22 14:30:14.945 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:15 compute-2 nova_compute[226433]: 2026-01-22 14:30:15.047 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:15 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:15 compute-2 ceph-mon[77081]: pgmap v1865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:15.982+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:16 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:16.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:16.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:16.988+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:17 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:17 compute-2 ceph-mon[77081]: pgmap v1866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:17.950+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:18 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:18 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:18 compute-2 sudo[249577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:18 compute-2 sudo[249577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:18 compute-2 sudo[249577]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:18 compute-2 sudo[249602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:18 compute-2 sudo[249602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:18 compute-2 sudo[249602]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:18.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:18.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:18.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:19 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:19 compute-2 ceph-mon[77081]: pgmap v1867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2415948044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:30:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2415948044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:30:19 compute-2 nova_compute[226433]: 2026-01-22 14:30:19.948 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:19.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:20 compute-2 nova_compute[226433]: 2026-01-22 14:30:20.048 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:20 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:20.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:20.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:20.986+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:21 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:21 compute-2 ceph-mon[77081]: pgmap v1868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:21.986+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:22 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:22.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:22.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:23 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:23 compute-2 ceph-mon[77081]: pgmap v1869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:23 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:23.899+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:24 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:24.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:24 compute-2 nova_compute[226433]: 2026-01-22 14:30:24.950 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:25 compute-2 nova_compute[226433]: 2026-01-22 14:30:25.049 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:25 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:25 compute-2 ceph-mon[77081]: pgmap v1870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:25.892+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:26 compute-2 podman[249631]: 2026-01-22 14:30:26.007475096 +0000 UTC m=+0.062600140 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 14:30:26 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:26.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:26.941+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:27 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:27 compute-2 ceph-mon[77081]: pgmap v1871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:27.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:28 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:28.971+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:29 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:29 compute-2 ceph-mon[77081]: pgmap v1872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:29.928+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:29 compute-2 nova_compute[226433]: 2026-01-22 14:30:29.952 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:30 compute-2 nova_compute[226433]: 2026-01-22 14:30:30.051 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:30 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:30 compute-2 ceph-mon[77081]: pgmap v1873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:30.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:31 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:31 compute-2 sshd-session[249652]: Invalid user ubuntu from 45.148.10.240 port 46246
Jan 22 14:30:31 compute-2 sshd-session[249652]: Connection closed by invalid user ubuntu 45.148.10.240 port 46246 [preauth]
Jan 22 14:30:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:31.849+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:32 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:32 compute-2 ceph-mon[77081]: pgmap v1874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:32.876+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:33 compute-2 sudo[249655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:33 compute-2 sudo[249655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-2 sudo[249655]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:33 compute-2 sudo[249680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:30:33 compute-2 sudo[249680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-2 sudo[249680]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:33 compute-2 sudo[249705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:33 compute-2 sudo[249705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-2 sudo[249705]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:33 compute-2 sudo[249730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:30:33 compute-2 sudo[249730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:33 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:33 compute-2 sudo[249730]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:33.914+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:34 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:30:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:30:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:30:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:30:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:30:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:30:34 compute-2 ceph-mon[77081]: pgmap v1875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:34.916+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:34 compute-2 nova_compute[226433]: 2026-01-22 14:30:34.955 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:35 compute-2 nova_compute[226433]: 2026-01-22 14:30:35.053 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:35 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:35.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:36 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:36 compute-2 ceph-mon[77081]: pgmap v1876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:36.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:37.009+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:37 compute-2 podman[249789]: 2026-01-22 14:30:37.065243384 +0000 UTC m=+0.120809776 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 14:30:37 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:38.009+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:38 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:38 compute-2 ceph-mon[77081]: pgmap v1877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:38 compute-2 sudo[249815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:38 compute-2 sudo[249815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:38 compute-2 sudo[249815]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:38 compute-2 sudo[249840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:38 compute-2 sudo[249840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:38 compute-2 sudo[249840]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:38.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:38.972+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:39 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:39 compute-2 nova_compute[226433]: 2026-01-22 14:30:39.958 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:39.957+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:40 compute-2 nova_compute[226433]: 2026-01-22 14:30:40.056 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:40 compute-2 sudo[249866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:40 compute-2 sudo[249866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:40 compute-2 sudo[249866]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:40 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:40 compute-2 ceph-mon[77081]: pgmap v1878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:30:40 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:30:40 compute-2 sudo[249891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:30:40 compute-2 sudo[249891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:40 compute-2 sudo[249891]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:40.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:40.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:41 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:41.971+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:42 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:42 compute-2 ceph-mon[77081]: pgmap v1879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:42 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:42.939+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:43 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 14:30:43 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:43.893+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:44 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:30:44 compute-2 ceph-mon[77081]: pgmap v1880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:44.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:44.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:44 compute-2 nova_compute[226433]: 2026-01-22 14:30:44.962 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:45 compute-2 nova_compute[226433]: 2026-01-22 14:30:45.057 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:45 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:45.958+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:46 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:46 compute-2 ceph-mon[77081]: pgmap v1881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:47.003+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:30:47.206 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:30:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:30:47.206 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:30:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:30:47.206 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:30:47 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:47 compute-2 ceph-mon[77081]: Health check update: 31 slow ops, oldest one blocked for 3238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:48.042+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:48 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:48 compute-2 ceph-mon[77081]: pgmap v1882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:30:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:30:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:49.063+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:49 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:49 compute-2 nova_compute[226433]: 2026-01-22 14:30:49.964 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:50.046+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:50 compute-2 nova_compute[226433]: 2026-01-22 14:30:50.058 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:50 compute-2 nova_compute[226433]: 2026-01-22 14:30:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:50 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:50 compute-2 ceph-mon[77081]: pgmap v1883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:51.014+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:51 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:52.055+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:52 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:52 compute-2 ceph-mon[77081]: pgmap v1884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:53.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:53 compute-2 nova_compute[226433]: 2026-01-22 14:30:53.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:53 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:53 compute-2 ceph-mon[77081]: Health check update: 31 slow ops, oldest one blocked for 3243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:30:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:54.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:54 compute-2 nova_compute[226433]: 2026-01-22 14:30:54.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:54.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:54 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:54 compute-2 ceph-mon[77081]: pgmap v1885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:54 compute-2 nova_compute[226433]: 2026-01-22 14:30:54.967 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:54.989+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:55 compute-2 nova_compute[226433]: 2026-01-22 14:30:55.061 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:30:55 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:56.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:30:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:30:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:56.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:56 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:56 compute-2 ceph-mon[77081]: pgmap v1886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:56 compute-2 podman[249927]: 2026-01-22 14:30:56.997899599 +0000 UTC m=+0.053717321 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:30:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:57.066+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:57 compute-2 nova_compute[226433]: 2026-01-22 14:30:57.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:57 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:57 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3697023630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:58.108+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.545 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.566 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "001ba9a6-ba0c-438d-8150-5cfbcec3d34f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.566 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "001ba9a6-ba0c-438d-8150-5cfbcec3d34f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:30:58 compute-2 nova_compute[226433]: 2026-01-22 14:30:58.582 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:30:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:58.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:30:58 compute-2 sudo[249948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:30:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:58.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:30:58 compute-2 sudo[249948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:58 compute-2 sudo[249948]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:58 compute-2 sudo[249973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:30:58 compute-2 sudo[249973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:30:58 compute-2 sudo[249973]: pam_unix(sudo:session): session closed for user root
Jan 22 14:30:58 compute-2 ceph-mon[77081]: pgmap v1887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 513 MiB data, 485 MiB used, 21 GiB / 21 GiB avail
Jan 22 14:30:58 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.078 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.079 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.086 226437 DEBUG nova.virt.hardware [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.086 226437 INFO nova.compute.claims [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:30:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:59.128+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:30:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:30:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.334 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.481 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.611 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.628 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.629 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.629 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.629 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.630 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.630 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:30:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:30:59 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/368227021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.896 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.903 226437 DEBUG nova.compute.provider_tree [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:30:59 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:30:59 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/368227021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.928 226437 DEBUG nova.scheduler.client.report [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.955 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.956 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:30:59 compute-2 nova_compute[226433]: 2026-01-22 14:30:59.970 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.007 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.008 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.028 226437 INFO nova.virt.libvirt.driver [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.046 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.062 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.136 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.137 226437 DEBUG nova.virt.libvirt.driver [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.137 226437 INFO nova.virt.libvirt.driver [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Creating image(s)
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.162 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:31:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:00.165+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.186 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.209 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.212 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.232 226437 DEBUG nova.policy [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '112b71a99add4ffeb28392e66d1a3d24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06252abc0be74ac08438db3d2f76db14', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.272 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.272 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.273 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.273 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.294 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:31:00 compute-2 nova_compute[226433]: 2026-01-22 14:31:00.297 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:31:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:31:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:31:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:00 compute-2 ceph-mon[77081]: pgmap v1888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 527 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 291 KiB/s wr, 12 op/s
Jan 22 14:31:00 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.152 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Successfully created port: ecd36baa-6fcf-48f7-a5a5-0e085089f614 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 22 14:31:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:01.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.542 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.542 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:31:01 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:31:01 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/475015090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.955 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Successfully updated port: ecd36baa-6fcf-48f7-a5a5-0e085089f614 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.971 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.976 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.976 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquired lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:31:01 compute-2 nova_compute[226433]: 2026-01-22 14:31:01.976 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.037 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.037 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.040 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.040 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.066 226437 DEBUG nova.compute.manager [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Received event network-changed-ecd36baa-6fcf-48f7-a5a5-0e085089f614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.067 226437 DEBUG nova.compute.manager [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Refreshing instance network info cache due to event network-changed-ecd36baa-6fcf-48f7-a5a5-0e085089f614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.067 226437 DEBUG oslo_concurrency.lockutils [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:31:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:02.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.218 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.221 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4327MB free_disk=20.768470764160156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.222 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.222 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.309 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.309 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.310 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.310 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.311 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.311 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.311 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.312 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.312 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=20GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:31:02 compute-2 nova_compute[226433]: 2026-01-22 14:31:02.625 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:31:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:02.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:02.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/475015090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:31:02 compute-2 ceph-mon[77081]: pgmap v1889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 579 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 28 op/s
Jan 22 14:31:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/141403899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:31:02 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:02 compute-2 ceph-mon[77081]: Health check update: 31 slow ops, oldest one blocked for 3248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:31:03 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/59182803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.074 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.081 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.087 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.111 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.141 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.142 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.143 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.144 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.164 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 22 14:31:03 compute-2 nova_compute[226433]: 2026-01-22 14:31:03.164 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:03.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:03 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/59182803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:31:03 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:04.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:31:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:31:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:04 compute-2 nova_compute[226433]: 2026-01-22 14:31:04.974 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:04 compute-2 ceph-mon[77081]: pgmap v1890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 579 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 28 op/s
Jan 22 14:31:04 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:05 compute-2 nova_compute[226433]: 2026-01-22 14:31:05.064 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:05.260+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:06 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:06.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:06 compute-2 nova_compute[226433]: 2026-01-22 14:31:06.615 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Updating instance_info_cache with network_info: [{"id": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "address": "fa:16:3e:8c:dd:7e", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecd36baa-6f", "ovs_interfaceid": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:31:06 compute-2 nova_compute[226433]: 2026-01-22 14:31:06.665 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Releasing lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:31:06 compute-2 nova_compute[226433]: 2026-01-22 14:31:06.665 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Instance network_info: |[{"id": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "address": "fa:16:3e:8c:dd:7e", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecd36baa-6f", "ovs_interfaceid": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:31:06 compute-2 nova_compute[226433]: 2026-01-22 14:31:06.667 226437 DEBUG oslo_concurrency.lockutils [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:31:06 compute-2 nova_compute[226433]: 2026-01-22 14:31:06.667 226437 DEBUG nova.network.neutron [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Refreshing network info cache for port ecd36baa-6fcf-48f7-a5a5-0e085089f614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:31:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:07 compute-2 ceph-mon[77081]: pgmap v1891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 14:31:07 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:07.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:08 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:08 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:08 compute-2 podman[250162]: 2026-01-22 14:31:08.05223005 +0000 UTC m=+0.103302002 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:31:08 compute-2 nova_compute[226433]: 2026-01-22 14:31:08.280 226437 DEBUG nova.network.neutron [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Updated VIF entry in instance network info cache for port ecd36baa-6fcf-48f7-a5a5-0e085089f614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 14:31:08 compute-2 nova_compute[226433]: 2026-01-22 14:31:08.280 226437 DEBUG nova.network.neutron [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Updating instance_info_cache with network_info: [{"id": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "address": "fa:16:3e:8c:dd:7e", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecd36baa-6f", "ovs_interfaceid": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:31:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:08.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:08 compute-2 nova_compute[226433]: 2026-01-22 14:31:08.356 226437 DEBUG oslo_concurrency.lockutils [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:31:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:08.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:09 compute-2 ceph-mon[77081]: pgmap v1892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 14:31:09 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:09.284+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:09 compute-2 nova_compute[226433]: 2026-01-22 14:31:09.977 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:10 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:10 compute-2 nova_compute[226433]: 2026-01-22 14:31:10.066 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:10.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:10.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:11 compute-2 ceph-mon[77081]: pgmap v1893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 22 14:31:11 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:11.272+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:12.306+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:12 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:12.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:13.354+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:13 compute-2 ceph-mon[77081]: pgmap v1894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 3.0 MiB/s wr, 30 op/s
Jan 22 14:31:13 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:14.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:14.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:14 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:14 compute-2 ceph-mon[77081]: pgmap v1895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 22 14:31:15 compute-2 nova_compute[226433]: 2026-01-22 14:31:15.024 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:15 compute-2 nova_compute[226433]: 2026-01-22 14:31:15.068 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:15.440+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:15 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:16.457+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:16.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:16.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:16 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:16 compute-2 ceph-mon[77081]: pgmap v1896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 22 14:31:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:17.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:17 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:17 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:18.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:18.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:18 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:18 compute-2 ceph-mon[77081]: pgmap v1897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:31:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4050486163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:31:19 compute-2 sudo[250194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:19 compute-2 sudo[250194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:19 compute-2 sudo[250194]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:19 compute-2 sudo[250219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:19 compute-2 sudo[250219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:19 compute-2 sudo[250219]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:19.416+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:19 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:20 compute-2 nova_compute[226433]: 2026-01-22 14:31:20.027 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:20 compute-2 nova_compute[226433]: 2026-01-22 14:31:20.071 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:20.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:20 compute-2 ceph-mon[77081]: pgmap v1898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:20 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:21.404+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:21 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:22.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:22.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:22 compute-2 ceph-mon[77081]: pgmap v1899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:22 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:23.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:23 compute-2 nova_compute[226433]: 2026-01-22 14:31:23.531 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:23 compute-2 nova_compute[226433]: 2026-01-22 14:31:23.531 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 22 14:31:24 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:24 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:24.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:24 compute-2 sshd-session[250246]: Invalid user ubuntu from 92.118.39.95 port 55562
Jan 22 14:31:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:24 compute-2 sshd-session[250246]: Connection closed by invalid user ubuntu 92.118.39.95 port 55562 [preauth]
Jan 22 14:31:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:24.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:25 compute-2 ceph-mon[77081]: pgmap v1900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:25 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:25 compute-2 nova_compute[226433]: 2026-01-22 14:31:25.067 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:25 compute-2 nova_compute[226433]: 2026-01-22 14:31:25.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:25.256+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:26 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:26.252+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:26.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:27 compute-2 ceph-mon[77081]: pgmap v1901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:27 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:27.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:28 compute-2 podman[250250]: 2026-01-22 14:31:28.058442255 +0000 UTC m=+0.109381520 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:31:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:28.305+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:28.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:29 compute-2 ceph-mon[77081]: pgmap v1902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:29 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:29 compute-2 ovn_controller[133156]: 2026-01-22T14:31:29Z|00056|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 22 14:31:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:29.334+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:30 compute-2 nova_compute[226433]: 2026-01-22 14:31:30.069 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:30 compute-2 nova_compute[226433]: 2026-01-22 14:31:30.074 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:30 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:30.324+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:30.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:31 compute-2 ceph-mon[77081]: pgmap v1903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:31 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:31:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:31.310+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:31:32 compute-2 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 14:31:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:32.359+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:32.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:33 compute-2 ceph-mon[77081]: pgmap v1904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:33 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:33 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:33.347+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:34 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:34.354+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:34.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:35 compute-2 nova_compute[226433]: 2026-01-22 14:31:35.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:35 compute-2 nova_compute[226433]: 2026-01-22 14:31:35.075 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:35 compute-2 ceph-mon[77081]: pgmap v1905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:35 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:35.318+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:36 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:36.356+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:36.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:36.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:37 compute-2 ceph-mon[77081]: pgmap v1906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:37 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.292379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297292509, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 2450, "num_deletes": 251, "total_data_size": 4674035, "memory_usage": 4731856, "flush_reason": "Manual Compaction"}
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297316382, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 3057164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53662, "largest_seqno": 56107, "table_properties": {"data_size": 3048175, "index_size": 5163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23266, "raw_average_key_size": 21, "raw_value_size": 3028198, "raw_average_value_size": 2778, "num_data_blocks": 222, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092130, "oldest_key_time": 1769092130, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 24133 microseconds, and 13205 cpu microseconds.
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.316530) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 3057164 bytes OK
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.316660) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.319011) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.319036) EVENT_LOG_v1 {"time_micros": 1769092297319028, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.319060) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 4662958, prev total WAL file size 4662958, number of live WAL files 2.
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.321858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(2985KB)], [105(9846KB)]
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297321915, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 13140142, "oldest_snapshot_seqno": -1}
Jan 22 14:31:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:37.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 10189 keys, 11563970 bytes, temperature: kUnknown
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297421109, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 11563970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11504562, "index_size": 32800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 273383, "raw_average_key_size": 26, "raw_value_size": 11327596, "raw_average_value_size": 1111, "num_data_blocks": 1246, "num_entries": 10189, "num_filter_entries": 10189, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.421427) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 11563970 bytes
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.422847) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 116.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 10704, records dropped: 515 output_compression: NoCompression
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.422865) EVENT_LOG_v1 {"time_micros": 1769092297422857, "job": 66, "event": "compaction_finished", "compaction_time_micros": 99264, "compaction_time_cpu_micros": 51502, "output_level": 6, "num_output_files": 1, "total_output_size": 11563970, "num_input_records": 10704, "num_output_records": 10189, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297423607, "job": 66, "event": "table_file_deletion", "file_number": 107}
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297425755, "job": 66, "event": "table_file_deletion", "file_number": 105}
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.321747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:38 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:38 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:38.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:38.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:38.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:39 compute-2 podman[250274]: 2026-01-22 14:31:39.063859632 +0000 UTC m=+0.114167473 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:31:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:39 compute-2 sudo[250300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:39 compute-2 sudo[250300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:39 compute-2 sudo[250300]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:39 compute-2 sudo[250325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:39 compute-2 sudo[250325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:39 compute-2 sudo[250325]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:39.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:39 compute-2 ceph-mon[77081]: pgmap v1907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:39 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:40 compute-2 nova_compute[226433]: 2026-01-22 14:31:40.076 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:31:40 compute-2 nova_compute[226433]: 2026-01-22 14:31:40.077 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:40 compute-2 nova_compute[226433]: 2026-01-22 14:31:40.078 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:31:40 compute-2 nova_compute[226433]: 2026-01-22 14:31:40.078 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:31:40 compute-2 nova_compute[226433]: 2026-01-22 14:31:40.079 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:31:40 compute-2 nova_compute[226433]: 2026-01-22 14:31:40.081 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:31:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:40.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:40 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:40 compute-2 sudo[250350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:40 compute-2 sudo[250350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:40 compute-2 sudo[250350]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:40 compute-2 sudo[250376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:31:40 compute-2 sudo[250376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:40 compute-2 sudo[250376]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:40 compute-2 sudo[250401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:40 compute-2 sudo[250401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:40 compute-2 sudo[250401]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:40.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:40 compute-2 sudo[250426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:31:40 compute-2 sudo[250426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:41 compute-2 sudo[250426]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:41.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:41 compute-2 ceph-mon[77081]: pgmap v1908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:41 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:42.289+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:42 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:31:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:31:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:31:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:31:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:31:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:31:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.464631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302464727, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 258, "total_data_size": 193899, "memory_usage": 201976, "flush_reason": "Manual Compaction"}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302468626, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 127264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56112, "largest_seqno": 56456, "table_properties": {"data_size": 125158, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5510, "raw_average_key_size": 18, "raw_value_size": 120743, "raw_average_value_size": 397, "num_data_blocks": 12, "num_entries": 304, "num_filter_entries": 304, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092298, "oldest_key_time": 1769092298, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 4024 microseconds, and 1681 cpu microseconds.
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.468672) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 127264 bytes OK
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.468700) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.470763) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.470786) EVENT_LOG_v1 {"time_micros": 1769092302470780, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.470814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 191450, prev total WAL file size 191450, number of live WAL files 2.
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.471367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323630' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(124KB)], [108(11MB)]
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302471479, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 11691234, "oldest_snapshot_seqno": -1}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 9966 keys, 11552162 bytes, temperature: kUnknown
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302542141, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 11552162, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11493797, "index_size": 32333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 269754, "raw_average_key_size": 27, "raw_value_size": 11320164, "raw_average_value_size": 1135, "num_data_blocks": 1223, "num_entries": 9966, "num_filter_entries": 9966, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.543603) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11552162 bytes
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.545531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 163.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(182.6) write-amplify(90.8) OK, records in: 10493, records dropped: 527 output_compression: NoCompression
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.545590) EVENT_LOG_v1 {"time_micros": 1769092302545565, "job": 68, "event": "compaction_finished", "compaction_time_micros": 70750, "compaction_time_cpu_micros": 33276, "output_level": 6, "num_output_files": 1, "total_output_size": 11552162, "num_input_records": 10493, "num_output_records": 9966, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302545881, "job": 68, "event": "table_file_deletion", "file_number": 110}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302550143, "job": 68, "event": "table_file_deletion", "file_number": 108}
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.471178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:31:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:42.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:43.260+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:43 compute-2 ceph-mon[77081]: pgmap v1909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:43 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:43 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:44.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:44 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:31:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:44.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:31:45 compute-2 nova_compute[226433]: 2026-01-22 14:31:45.083 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:31:45 compute-2 nova_compute[226433]: 2026-01-22 14:31:45.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:31:45 compute-2 nova_compute[226433]: 2026-01-22 14:31:45.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:31:45 compute-2 nova_compute[226433]: 2026-01-22 14:31:45.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:31:45 compute-2 nova_compute[226433]: 2026-01-22 14:31:45.118 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:45 compute-2 nova_compute[226433]: 2026-01-22 14:31:45.118 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:31:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:45.338+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:45 compute-2 ceph-mon[77081]: pgmap v1910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:45 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:46.355+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:46 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:46.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:46.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:31:47.207 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:31:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:31:47.207 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:31:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:31:47.207 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:31:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:47.308+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:47 compute-2 ceph-mon[77081]: pgmap v1911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:47 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:31:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:31:47 compute-2 sudo[250485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:47 compute-2 sudo[250485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:47 compute-2 sudo[250485]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:47 compute-2 sudo[250510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:31:47 compute-2 sudo[250510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:47 compute-2 sudo[250510]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:48.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:48 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:48 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:48 compute-2 ceph-mon[77081]: pgmap v1912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:48.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:48.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:49.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:49 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:50 compute-2 nova_compute[226433]: 2026-01-22 14:31:50.119 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:50 compute-2 nova_compute[226433]: 2026-01-22 14:31:50.120 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:31:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:50.281+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:50 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:50 compute-2 ceph-mon[77081]: pgmap v1913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:31:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:50.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:51.313+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:51 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:51 compute-2 nova_compute[226433]: 2026-01-22 14:31:51.530 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:52.347+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:52 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:52 compute-2 ceph-mon[77081]: pgmap v1914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:52.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:53.326+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:53 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:53 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:31:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:54.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:54 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:54 compute-2 ceph-mon[77081]: pgmap v1915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:55 compute-2 nova_compute[226433]: 2026-01-22 14:31:55.122 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:31:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:55.321+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:55 compute-2 nova_compute[226433]: 2026-01-22 14:31:55.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:55 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:55 compute-2 nova_compute[226433]: 2026-01-22 14:31:55.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:56.344+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:56 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:56 compute-2 ceph-mon[77081]: pgmap v1916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:56.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:56.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:57.322+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:57 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:58.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:58 compute-2 nova_compute[226433]: 2026-01-22 14:31:58.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:58 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:58 compute-2 ceph-mon[77081]: pgmap v1917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:31:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:31:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:58.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:31:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:31:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:31:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:31:59 compute-2 podman[250541]: 2026-01-22 14:31:59.034623676 +0000 UTC m=+0.082433781 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:31:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:31:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:59.356+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:31:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:59 compute-2 sudo[250561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:59 compute-2 sudo[250561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:59 compute-2 sudo[250561]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:59 compute-2 sudo[250586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:31:59 compute-2 sudo[250586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:31:59 compute-2 sudo[250586]: pam_unix(sudo:session): session closed for user root
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.561 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.562 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.563 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.563 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.564 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.564 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:31:59 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.723 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.723 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.724 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:31:59 compute-2 nova_compute[226433]: 2026-01-22 14:31:59.724 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:32:00 compute-2 nova_compute[226433]: 2026-01-22 14:32:00.123 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:00 compute-2 nova_compute[226433]: 2026-01-22 14:32:00.217 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:32:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:00.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:00 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:00 compute-2 ceph-mon[77081]: pgmap v1918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:32:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:00.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.137 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.252 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.252 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.253 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.253 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.254 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.254 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:32:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:01.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:01 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.698 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.699 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.700 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.700 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:32:01 compute-2 nova_compute[226433]: 2026-01-22 14:32:01.701 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:32:02 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4247879608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.141 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.315 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.316 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.320 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.320 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:32:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:02.336+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.555 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.557 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4330MB free_disk=20.73322296142578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.558 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.558 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:02 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:02 compute-2 ceph-mon[77081]: pgmap v1919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 14:32:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4247879608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:02 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.781 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.782 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.782 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.783 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.783 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.783 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.784 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.784 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.785 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=20GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.809 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.835 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.835 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 14:32:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:02.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:02.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.851 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 14:32:02 compute-2 nova_compute[226433]: 2026-01-22 14:32:02.873 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 14:32:03 compute-2 nova_compute[226433]: 2026-01-22 14:32:03.036 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:03.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:32:03 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1801557705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:03 compute-2 nova_compute[226433]: 2026-01-22 14:32:03.488 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:03 compute-2 nova_compute[226433]: 2026-01-22 14:32:03.497 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:32:03 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:03 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1801557705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:03 compute-2 nova_compute[226433]: 2026-01-22 14:32:03.649 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:32:03 compute-2 nova_compute[226433]: 2026-01-22 14:32:03.652 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:32:03 compute-2 nova_compute[226433]: 2026-01-22 14:32:03.653 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:04.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:04 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:04 compute-2 ceph-mon[77081]: pgmap v1920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:04.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:05 compute-2 nova_compute[226433]: 2026-01-22 14:32:05.126 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:05.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:05 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:06.363+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:06 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:06 compute-2 ceph-mon[77081]: pgmap v1921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:06.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:07.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:07 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:07 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:08.406+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:08 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:08 compute-2 ceph-mon[77081]: pgmap v1922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:08.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:08.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:09.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:09 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:10 compute-2 podman[250660]: 2026-01-22 14:32:10.09729533 +0000 UTC m=+0.144210473 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:32:10 compute-2 nova_compute[226433]: 2026-01-22 14:32:10.127 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:10 compute-2 nova_compute[226433]: 2026-01-22 14:32:10.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:10 compute-2 nova_compute[226433]: 2026-01-22 14:32:10.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:32:10 compute-2 nova_compute[226433]: 2026-01-22 14:32:10.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:10 compute-2 nova_compute[226433]: 2026-01-22 14:32:10.130 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:10 compute-2 nova_compute[226433]: 2026-01-22 14:32:10.130 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:10.352+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:10 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:10 compute-2 ceph-mon[77081]: pgmap v1923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:10.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:10.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:11.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:11 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:12.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:12 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:12 compute-2 ceph-mon[77081]: pgmap v1924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:12.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:12.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:13.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:13 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:13 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:14.331+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:14 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:14 compute-2 ceph-mon[77081]: pgmap v1925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:14.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:15 compute-2 nova_compute[226433]: 2026-01-22 14:32:15.131 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:15 compute-2 nova_compute[226433]: 2026-01-22 14:32:15.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:15 compute-2 nova_compute[226433]: 2026-01-22 14:32:15.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:32:15 compute-2 nova_compute[226433]: 2026-01-22 14:32:15.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:15 compute-2 nova_compute[226433]: 2026-01-22 14:32:15.150 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:15 compute-2 nova_compute[226433]: 2026-01-22 14:32:15.151 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:15.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:15 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:16.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:16 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:16 compute-2 ceph-mon[77081]: pgmap v1926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:16.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:17.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:17 compute-2 nova_compute[226433]: 2026-01-22 14:32:17.649 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:17 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:32:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:32:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:32:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:32:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:18.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:18 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:18 compute-2 ceph-mon[77081]: pgmap v1927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:32:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:32:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:18.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:19.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:19 compute-2 sudo[250692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:19 compute-2 sudo[250692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:19 compute-2 sudo[250692]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:19 compute-2 sudo[250717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:19 compute-2 sudo[250717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:19 compute-2 sudo[250717]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:19 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:20 compute-2 nova_compute[226433]: 2026-01-22 14:32:20.151 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:20 compute-2 nova_compute[226433]: 2026-01-22 14:32:20.153 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:20 compute-2 nova_compute[226433]: 2026-01-22 14:32:20.154 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:32:20 compute-2 nova_compute[226433]: 2026-01-22 14:32:20.154 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:20 compute-2 nova_compute[226433]: 2026-01-22 14:32:20.187 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:20 compute-2 nova_compute[226433]: 2026-01-22 14:32:20.187 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:20.406+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:20.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:20 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:20 compute-2 ceph-mon[77081]: pgmap v1928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:21.428+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:22 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:22.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:22.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:23 compute-2 ceph-mon[77081]: pgmap v1929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:23 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:23 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:23.438+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:24 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:24.483+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:24.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:24.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:25 compute-2 ceph-mon[77081]: pgmap v1930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:25 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:25 compute-2 nova_compute[226433]: 2026-01-22 14:32:25.189 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:25 compute-2 nova_compute[226433]: 2026-01-22 14:32:25.191 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:25.452+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:26 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:26.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:26.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:27 compute-2 ceph-mon[77081]: pgmap v1931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:27 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:27.374+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:27 compute-2 nova_compute[226433]: 2026-01-22 14:32:27.870 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:27 compute-2 nova_compute[226433]: 2026-01-22 14:32:27.870 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:27 compute-2 nova_compute[226433]: 2026-01-22 14:32:27.885 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:32:27 compute-2 nova_compute[226433]: 2026-01-22 14:32:27.954 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:27 compute-2 nova_compute[226433]: 2026-01-22 14:32:27.954 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:27 compute-2 nova_compute[226433]: 2026-01-22 14:32:27.960 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:32:27 compute-2 nova_compute[226433]: 2026-01-22 14:32:27.960 226437 INFO nova.compute.claims [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:32:28 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:28 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.225 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:28.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:32:28 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3871583424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.641 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.646 226437 DEBUG nova.compute.provider_tree [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.660 226437 DEBUG nova.scheduler.client.report [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.680 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.681 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.724 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.724 226437 DEBUG nova.network.neutron [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.750 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.775 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:32:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 14:32:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:28 compute-2 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.884 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.885 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.885 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating image(s)
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.915 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.944 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.973 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:32:28 compute-2 nova_compute[226433]: 2026-01-22 14:32:28.978 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.044 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.046 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.046 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.047 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.076 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.081 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:29 compute-2 ceph-mon[77081]: pgmap v1932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:29 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:29 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3871583424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.238 226437 DEBUG nova.network.neutron [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.239 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.357 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:29.396+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
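The blocked op osd.2 keeps reporting (an omap read of rbd_mirror_snapshot_schedule) can be inspected through the OSD admin socket on this host. A sketch — the JSON key names follow typical dump_ops_in_flight output and should be treated as assumptions about this Ceph version:

```python
# Drilling into the slow ops reported by osd.2 via its admin socket.
import json, subprocess

out = subprocess.run(['ceph', 'daemon', 'osd.2', 'dump_ops_in_flight'],
                     check=True, capture_output=True, text=True).stdout
ops = json.loads(out)
print(ops['num_ops'])                    # 36 while the lines above were logged
for op in ops['ops'][:3]:
    print(op['age'], op['description'])  # oldest: the rbd_mirror_snapshot_schedule read
```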
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.452 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] resizing rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
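rbd_utils performs that resize through the python-rbd bindings rather than the CLI. A sketch of the equivalent call, assuming python3-rados and python3-rbd are installed:

```python
# Resize the instance disk to 1073741824 bytes (1 GiB, the flavor's root_gb=1).
import rados
import rbd

with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:
        with rbd.Image(ioctx, '33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk') as image:
            image.resize(1073741824)
```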
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.566 226437 DEBUG nova.objects.instance [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'migration_context' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.588 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.588 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Ensure instance console log exists: /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.589 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.590 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.590 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.592 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.596 226437 WARNING nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.602 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.604 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.609 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.610 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.613 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.613 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T14:32:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25a58c00-ff14-4ac2-b88f-b2e5060d0aa8',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-144408879',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.614 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.614 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.616 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.616 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.616 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.617 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
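The topology walk above (limits 0:0:0, preference 0:0:0, one possible topology 1:1:1) enumerates sockets/cores/threads factorizations of the vCPU count under per-axis caps of 65536. A toy re-implementation of that idea, not Nova's code:

```python
# Toy version of the CPU-topology enumeration logged above.
def possible_topologies(vcpus, max_each=65536):
    # Yield every (sockets, cores, threads) whose product equals vcpus,
    # with each axis capped at max_each (the 65536 limits in the log).
    cap = min(vcpus, max_each)
    for sockets in range(1, cap + 1):
        for cores in range(1, cap + 1):
            for threads in range(1, cap + 1):
                if sockets * cores * threads == vcpus:
                    yield (sockets, cores, threads)

print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the single topology found
```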
Jan 22 14:32:29 compute-2 nova_compute[226433]: 2026-01-22 14:32:29.620 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:30 compute-2 podman[250955]: 2026-01-22 14:32:30.026754315 +0000 UTC m=+0.078605511 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
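The health_status=healthy event above is podman recording a passing run of the container's configured healthcheck ('test': '/openstack/healthcheck'). The same check can be triggered by hand:

```python
# Invoke the container healthcheck podman runs on its schedule.
import subprocess

r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
print('healthy' if r.returncode == 0 else 'unhealthy')  # exit 0 == healthy
```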
Jan 22 14:32:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:32:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1888876612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.059 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
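Nova runs "ceph mon dump --format=json" to discover the monitor addresses that later appear as <host> elements in the guest's rbd disk XML. A sketch of that derivation — the public_addr field layout is the usual mon dump format and is assumed here:

```python
# Derive the rbd <host> list from the mon dump command in the log.
import json, subprocess

out = subprocess.run(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True).stdout
mons = json.loads(out)
# Each entry carries a public_addr like "192.168.122.100:6789/0".
hosts = [m['public_addr'].rsplit('/', 1)[0] for m in mons['mons']]
print(hosts)  # e.g. ['192.168.122.100:6789', '192.168.122.102:6789', ...]
```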
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.088 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.092 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:30 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1888876612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.191 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:30.429+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:32:30 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2091226058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.573 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.575 226437 DEBUG nova.objects.instance [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'pci_devices' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.599 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] End _get_guest_xml xml=<domain type="kvm">
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <uuid>33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</uuid>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <name>instance-00000015</name>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <memory>131072</memory>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <vcpu>1</vcpu>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <metadata>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <nova:name>tempest-MigrationsAdminTest-server-685681022</nova:name>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <nova:creationTime>2026-01-22 14:32:29</nova:creationTime>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <nova:flavor name="tempest-test_resize_flavor_-144408879">
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <nova:memory>128</nova:memory>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <nova:disk>1</nova:disk>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <nova:swap>0</nova:swap>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <nova:ephemeral>0</nova:ephemeral>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <nova:vcpus>1</nova:vcpus>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       </nova:flavor>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <nova:owner>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <nova:user uuid="549def9aedaa41be8d41ae7c6e534303">tempest-MigrationsAdminTest-775661994-project-member</nova:user>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <nova:project uuid="98a3ce5a8a524b0d8327784d9df9a9db">tempest-MigrationsAdminTest-775661994</nova:project>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       </nova:owner>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <nova:ports/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </nova:instance>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   </metadata>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <sysinfo type="smbios">
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <system>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <entry name="manufacturer">RDO</entry>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <entry name="product">OpenStack Compute</entry>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <entry name="serial">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <entry name="uuid">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <entry name="family">Virtual Machine</entry>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </system>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   </sysinfo>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <os>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <boot dev="hd"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <smbios mode="sysinfo"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   </os>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <features>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <acpi/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <apic/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <vmcoreinfo/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   </features>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <clock offset="utc">
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <timer name="pit" tickpolicy="delay"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <timer name="hpet" present="no"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   </clock>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <cpu mode="custom" match="exact">
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <model>Nehalem</model>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <topology sockets="1" cores="1" threads="1"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   </cpu>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   <devices>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <disk type="network" device="disk">
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk">
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       </source>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <target dev="vda" bus="virtio"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <disk type="network" device="cdrom">
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config">
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       </source>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:32:30 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <target dev="sda" bus="sata"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <serial type="pty">
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <log file="/var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log" append="off"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </serial>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <video>
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </video>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <input type="tablet" bus="usb"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <rng model="virtio">
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <backend model="random">/dev/urandom</backend>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </rng>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <controller type="usb" index="0"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     <memballoon model="virtio">
Jan 22 14:32:30 compute-2 nova_compute[226433]:       <stats period="10"/>
Jan 22 14:32:30 compute-2 nova_compute[226433]:     </memballoon>
Jan 22 14:32:30 compute-2 nova_compute[226433]:   </devices>
Jan 22 14:32:30 compute-2 nova_compute[226433]: </domain>
Jan 22 14:32:30 compute-2 nova_compute[226433]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
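Once _get_guest_xml returns, the libvirt driver defines and boots the domain through the libvirt API. A minimal sketch with libvirt-python — not the driver's actual code path, and the XML file location is a hypothetical copy of the dump above:

```python
# Define and start a domain from the XML logged above.
import libvirt

conn = libvirt.open('qemu:///system')
try:
    with open('/tmp/instance-00000015.xml') as f:  # hypothetical copy of the XML
        dom = conn.defineXML(f.read())             # persist the domain config
    dom.createWithFlags(0)                         # boot it; systemd-machined then
                                                   # logs "New machine qemu-5-..."
finally:
    conn.close()
```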
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.650 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.650 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.651 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Using config drive
Jan 22 14:32:30 compute-2 nova_compute[226433]: 2026-01-22 14:32:30.870 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:32:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 14:32:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:30 compute-2 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
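The anonymous "HEAD / HTTP/1.0" entries in the beast access log look like periodic load-balancer health probes against radosgw. One can be reproduced with the standard library — the port is an assumption, since it does not appear in the log:

```python
# Reproduce a HEAD health probe like the ones radosgw is logging.
import http.client

conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)  # port assumed
conn.request('HEAD', '/')
print(conn.getresponse().status)  # 200, as in the beast access lines
```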
Jan 22 14:32:31 compute-2 nova_compute[226433]: 2026-01-22 14:32:31.148 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating config drive at /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config
Jan 22 14:32:31 compute-2 nova_compute[226433]: 2026-01-22 14:32:31.154 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78vci41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:31 compute-2 ceph-mon[77081]: pgmap v1933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 601 MiB data, 525 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:32:31 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2091226058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:32:31 compute-2 nova_compute[226433]: 2026-01-22 14:32:31.281 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78vci41" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
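The config drive is an ISO 9660 volume labelled config-2, which cloud-init probes for by label. A sketch rebuilding one with the same mkisofs flags as the logged invocation — the staging directory stands in for the /tmp/tmpn78vci41 tree Nova generated:

```python
# Rebuild a config-2 ISO the way the logged mkisofs invocation does.
import subprocess

subprocess.run(
    ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
     '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
     '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
     '-V', 'config-2',          # the volume label cloud-init looks for
     '/tmp/config_drive_src'],  # placeholder for the metadata tree in the log
    check=True)
```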
Jan 22 14:32:31 compute-2 nova_compute[226433]: 2026-01-22 14:32:31.311 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:32:31 compute-2 nova_compute[226433]: 2026-01-22 14:32:31.316 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:31.469+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:31 compute-2 nova_compute[226433]: 2026-01-22 14:32:31.478 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:31 compute-2 nova_compute[226433]: 2026-01-22 14:32:31.479 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Deleting local config drive /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config because it was imported into RBD.
Jan 22 14:32:31 compute-2 systemd-machined[194970]: New machine qemu-5-instance-00000015.
Jan 22 14:32:31 compute-2 systemd[1]: Started Virtual Machine qemu-5-instance-00000015.
Jan 22 14:32:32 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:32.506+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.581 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092352.580007, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.581 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Resumed (Lifecycle Event)
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.584 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.585 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.588 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance spawned successfully.
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.589 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.612 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.620 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.621 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.622 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.622 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.623 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.623 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.629 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.664 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (spawning). Skip.
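The "DB power_state: 0, VM power_state: 1" comparison uses nova's numeric power-state constants (NOSTATE=0, RUNNING=1), derived from the libvirt domain state. A sketch of that translation — the mapping shown is an illustrative subset, not Nova's full table:

```python
# Translate a libvirt domain state into a nova-style power_state value.
import libvirt

NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

LIBVIRT_TO_NOVA = {
    libvirt.VIR_DOMAIN_RUNNING:     RUNNING,
    libvirt.VIR_DOMAIN_PAUSED:      PAUSED,
    libvirt.VIR_DOMAIN_SHUTOFF:     SHUTDOWN,
    libvirt.VIR_DOMAIN_CRASHED:     CRASHED,
    libvirt.VIR_DOMAIN_PMSUSPENDED: SUSPENDED,
}

def nova_power_state(dom):
    state, _reason = dom.state()  # e.g. (VIR_DOMAIN_RUNNING, ...)
    return LIBVIRT_TO_NOVA.get(state, NOSTATE)  # 1 for the freshly started guest
```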
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.665 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092352.5834947, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.665 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Started (Lifecycle Event)
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.687 226437 INFO nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Took 3.80 seconds to spawn the instance on the hypervisor.
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.688 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.690 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.699 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.763 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.791 226437 INFO nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Took 4.86 seconds to build instance.
Jan 22 14:32:32 compute-2 nova_compute[226433]: 2026-01-22 14:32:32.808 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 14:32:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:32 compute-2 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:33 compute-2 ceph-mon[77081]: pgmap v1934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 633 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 22 14:32:33 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3342 sec, osd.2 has slow ops (SLOW_OPS)
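The SLOW_OPS health check above (oldest op blocked for 3342 sec on osd.2) is also queryable in structured form. A sketch using the ceph CLI's JSON output with the same client credentials as the nova commands — the checks/summary key layout is the usual format and is assumed here:

```python
# Pull the SLOW_OPS condition from the cluster health report.
import json, subprocess

out = subprocess.run(
    ['ceph', 'health', 'detail', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True).stdout
health = json.loads(out)
slow = health.get('checks', {}).get('SLOW_OPS')
if slow:
    print(slow['summary']['message'])  # "36 slow ops, oldest one blocked for ..."
```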
Jan 22 14:32:33 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:33.536+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:34 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:34.502+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:32:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:32:35 compute-2 nova_compute[226433]: 2026-01-22 14:32:35.194 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:35 compute-2 nova_compute[226433]: 2026-01-22 14:32:35.196 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:35 compute-2 ceph-mon[77081]: pgmap v1935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 633 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 22 14:32:35 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:35.468+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:36 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:36.484+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:36.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:37 compute-2 ceph-mon[77081]: pgmap v1936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:37 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:37.533+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:38 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:38 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:38 compute-2 ceph-mon[77081]: pgmap v1937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:38.493+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:38.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:38 compute-2 nova_compute[226433]: 2026-01-22 14:32:38.936 226437 DEBUG nova.compute.manager [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.033 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.034 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.074 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'pci_requests' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.095 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.096 226437 INFO nova.compute.claims [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.096 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'resources' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.113 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'pci_devices' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.178 226437 INFO nova.compute.resource_tracker [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating resource usage from migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0
Jan 22 14:32:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.379 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:32:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:39.454+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:39 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:32:39 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3676994071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.841 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.851 226437 DEBUG nova.compute.provider_tree [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:32:39 compute-2 sudo[251156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:39 compute-2 sudo[251156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:39 compute-2 sudo[251156]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.878 226437 DEBUG nova.scheduler.client.report [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.928 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.929 226437 INFO nova.compute.manager [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Migrating
Jan 22 14:32:39 compute-2 sudo[251183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:39 compute-2 sudo[251183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:39 compute-2 sudo[251183]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.977 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.977 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:32:39 compute-2 nova_compute[226433]: 2026-01-22 14:32:39.978 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:32:40 compute-2 nova_compute[226433]: 2026-01-22 14:32:40.184 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:32:40 compute-2 nova_compute[226433]: 2026-01-22 14:32:40.197 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:40.416+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:40 compute-2 nova_compute[226433]: 2026-01-22 14:32:40.546 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:32:40 compute-2 nova_compute[226433]: 2026-01-22 14:32:40.569 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:32:40 compute-2 nova_compute[226433]: 2026-01-22 14:32:40.662 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 22 14:32:40 compute-2 nova_compute[226433]: 2026-01-22 14:32:40.666 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 22 14:32:40 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3676994071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:32:40 compute-2 ceph-mon[77081]: pgmap v1938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:40.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:32:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:32:41 compute-2 podman[251209]: 2026-01-22 14:32:41.131598267 +0000 UTC m=+0.179978263 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:32:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:41.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:41 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:42.426+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:42 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:42 compute-2 ceph-mon[77081]: pgmap v1939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 22 14:32:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:42.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:43.440+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:43 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:43 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:44.471+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:32:44 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:44 compute-2 ceph-mon[77081]: pgmap v1940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 648 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 76 op/s
Jan 22 14:32:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:44.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:45 compute-2 nova_compute[226433]: 2026-01-22 14:32:45.198 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:45 compute-2 nova_compute[226433]: 2026-01-22 14:32:45.199 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:45 compute-2 nova_compute[226433]: 2026-01-22 14:32:45.199 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:32:45 compute-2 nova_compute[226433]: 2026-01-22 14:32:45.200 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:45 compute-2 nova_compute[226433]: 2026-01-22 14:32:45.200 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:45 compute-2 nova_compute[226433]: 2026-01-22 14:32:45.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:45.471+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:45 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 14:32:45 compute-2 sshd-session[251237]: Invalid user ubuntu from 45.148.10.240 port 58388
Jan 22 14:32:46 compute-2 sshd-session[251237]: Connection closed by invalid user ubuntu 45.148.10.240 port 58388 [preauth]
Jan 22 14:32:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:46.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:46.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:46 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:32:46 compute-2 ceph-mon[77081]: pgmap v1941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 95 op/s
Jan 22 14:32:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:32:47.208 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:32:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:32:47.208 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:32:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:32:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:32:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:47.541+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:47 compute-2 sudo[251240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:47 compute-2 sudo[251240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-2 sudo[251240]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:47 compute-2 sudo[251265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:32:47 compute-2 sudo[251265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-2 sudo[251265]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:47 compute-2 sudo[251290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:47 compute-2 sudo[251290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-2 sudo[251290]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:47 compute-2 sudo[251315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:32:47 compute-2 sudo[251315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:47 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:48 compute-2 sudo[251315]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:48.551+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:48 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:48 compute-2 ceph-mon[77081]: pgmap v1942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Jan 22 14:32:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:32:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:32:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:32:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:32:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:32:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:32:48 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:49.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:49 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:50 compute-2 nova_compute[226433]: 2026-01-22 14:32:50.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:50.505+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:50 compute-2 nova_compute[226433]: 2026-01-22 14:32:50.718 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 22 14:32:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:50.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:50.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:50 compute-2 ceph-mon[77081]: pgmap v1943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 675 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Jan 22 14:32:50 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:51.460+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:51 compute-2 nova_compute[226433]: 2026-01-22 14:32:51.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:51 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:52.506+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:32:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:32:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:52 compute-2 ceph-mon[77081]: pgmap v1944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 14:32:52 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 3357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:52 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:53.541+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:54 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:54 compute-2 sudo[251376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:32:54 compute-2 sudo[251376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:54 compute-2 sudo[251376]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:54.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:54 compute-2 sudo[251401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:32:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:54 compute-2 sudo[251401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:32:54 compute-2 sudo[251401]: pam_unix(sudo:session): session closed for user root
Jan 22 14:32:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:54.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:54.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:55 compute-2 ceph-mon[77081]: pgmap v1945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 14:32:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:32:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:32:55 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:55 compute-2 nova_compute[226433]: 2026-01-22 14:32:55.203 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:32:55 compute-2 nova_compute[226433]: 2026-01-22 14:32:55.204 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:55 compute-2 nova_compute[226433]: 2026-01-22 14:32:55.204 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:32:55 compute-2 nova_compute[226433]: 2026-01-22 14:32:55.204 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:55 compute-2 nova_compute[226433]: 2026-01-22 14:32:55.205 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:32:55 compute-2 nova_compute[226433]: 2026-01-22 14:32:55.206 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:32:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:55.549+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:56 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:56 compute-2 nova_compute[226433]: 2026-01-22 14:32:56.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:56.549+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:56.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:57 compute-2 ceph-mon[77081]: pgmap v1946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 22 14:32:57 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.483246) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377483281, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1261, "num_deletes": 252, "total_data_size": 2149753, "memory_usage": 2179112, "flush_reason": "Manual Compaction"}
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377490054, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 920251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56461, "largest_seqno": 57717, "table_properties": {"data_size": 916035, "index_size": 1612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13120, "raw_average_key_size": 21, "raw_value_size": 906101, "raw_average_value_size": 1487, "num_data_blocks": 70, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092303, "oldest_key_time": 1769092303, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 6834 microseconds, and 3091 cpu microseconds.
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.490085) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 920251 bytes OK
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.490100) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492394) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492406) EVENT_LOG_v1 {"time_micros": 1769092377492403, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492423) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2143590, prev total WAL file size 2143590, number of live WAL files 2.
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.493087) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373538' seq:0, type:0; will stop at (end)
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(898KB)], [111(11MB)]
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377493118, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12472413, "oldest_snapshot_seqno": -1}
Jan 22 14:32:57 compute-2 nova_compute[226433]: 2026-01-22 14:32:57.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 10089 keys, 9037738 bytes, temperature: kUnknown
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377544652, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 9037738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8982706, "index_size": 28680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 272986, "raw_average_key_size": 27, "raw_value_size": 8811056, "raw_average_value_size": 873, "num_data_blocks": 1070, "num_entries": 10089, "num_filter_entries": 10089, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.544877) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 9037738 bytes
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.545998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.7 rd, 175.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.4) write-amplify(9.8) OK, records in: 10575, records dropped: 486 output_compression: NoCompression
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.546025) EVENT_LOG_v1 {"time_micros": 1769092377546011, "job": 70, "event": "compaction_finished", "compaction_time_micros": 51610, "compaction_time_cpu_micros": 23250, "output_level": 6, "num_output_files": 1, "total_output_size": 9037738, "num_input_records": 10575, "num_output_records": 10089, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377546415, "job": 70, "event": "table_file_deletion", "file_number": 113}
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377549223, "job": 70, "event": "table_file_deletion", "file_number": 111}
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:32:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:57.552+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:58 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:32:58 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:58.584+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:32:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:58.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:32:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:32:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:32:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:32:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:32:59 compute-2 ceph-mon[77081]: pgmap v1947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 574 KiB/s wr, 35 op/s
Jan 22 14:32:59 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.547 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 14:32:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:59.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:59 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:32:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.772 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.774 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.775 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:32:59 compute-2 nova_compute[226433]: 2026-01-22 14:32:59.776 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.012 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:33:00 compute-2 sudo[251429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:00 compute-2 sudo[251429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:00 compute-2 sudo[251429]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:00 compute-2 sudo[251454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:00 compute-2 sudo[251454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:00 compute-2 sudo[251454]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.203 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.206 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:00 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.221 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.222 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.223 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:00 compute-2 podman[251478]: 2026-01-22 14:33:00.247641515 +0000 UTC m=+0.092571963 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:00 compute-2 nova_compute[226433]: 2026-01-22 14:33:00.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:00.604+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:00 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:00.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:01 compute-2 ceph-mon[77081]: pgmap v1948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 574 KiB/s wr, 35 op/s
Jan 22 14:33:01 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:01 compute-2 nova_compute[226433]: 2026-01-22 14:33:01.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:01 compute-2 nova_compute[226433]: 2026-01-22 14:33:01.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:33:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:01.576+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:01 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:01 compute-2 nova_compute[226433]: 2026-01-22 14:33:01.764 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 22 14:33:02 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:02.573+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:02 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:02.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:03 compute-2 ceph-mon[77081]: pgmap v1949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 56 KiB/s wr, 7 op/s
Jan 22 14:33:03 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:03 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.543 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.543 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.544 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.544 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.544 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:33:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:03.614+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:03 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.746 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.747 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.747 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.748 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.748 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.751 226437 INFO nova.compute.manager [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Terminating instance
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.752 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.753 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:33:03 compute-2 nova_compute[226433]: 2026-01-22 14:33:03.753 226437 DEBUG nova.network.neutron [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:33:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:33:03 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/433948236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.011 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.090 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.090 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.093 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.093 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.096 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.096 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.159 226437 DEBUG nova.network.neutron [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:33:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.248 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.249 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4142MB free_disk=20.68789291381836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.249 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.249 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.322 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Applying migration context for instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 as it has an incoming, in-progress migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0. Migration status is migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.323 226437 INFO nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating resource usage from migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.353 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.353 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 9 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1728MB phys_disk=20GB used_disk=9GB total_vcpus=8 used_vcpus=9 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:33:04 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:04 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/433948236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.508 226437 DEBUG nova.network.neutron [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.530 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.531 226437 DEBUG nova.compute.manager [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 22 14:33:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:04.566+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:04 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:04 compute-2 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 22 14:33:04 compute-2 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000d.scope: Consumed 41.112s CPU time.
Jan 22 14:33:04 compute-2 systemd-machined[194970]: Machine qemu-3-instance-0000000d terminated.
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.689 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.756 226437 INFO nova.virt.libvirt.driver [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance destroyed successfully.
Jan 22 14:33:04 compute-2 nova_compute[226433]: 2026-01-22 14:33:04.756 226437 DEBUG nova.objects.instance [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'resources' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:33:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:04.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:33:05 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/33695955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.136 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.144 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.170 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.192 226437 INFO nova.virt.libvirt.driver [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deleting instance files /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871_del
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.192 226437 INFO nova.virt.libvirt.driver [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deletion of /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871_del complete
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.196 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.196 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.208 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.245 226437 INFO nova.compute.manager [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.246 226437 DEBUG oslo.service.loopingcall [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.247 226437 DEBUG nova.compute.manager [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.247 226437 DEBUG nova.network.neutron [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 22 14:33:05 compute-2 ceph-mon[77081]: pgmap v1950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 571 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:05 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:05 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/33695955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.404 226437 DEBUG nova.network.neutron [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.418 226437 DEBUG nova.network.neutron [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.434 226437 INFO nova.compute.manager [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 0.19 seconds to deallocate network for instance.
Jan 22 14:33:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:05.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:05 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.553 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.553 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:05 compute-2 nova_compute[226433]: 2026-01-22 14:33:05.786 226437 DEBUG oslo_concurrency.processutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:33:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:33:06 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1940242111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:06 compute-2 nova_compute[226433]: 2026-01-22 14:33:06.203 226437 DEBUG oslo_concurrency.processutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:33:06 compute-2 nova_compute[226433]: 2026-01-22 14:33:06.210 226437 DEBUG nova.compute.provider_tree [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:33:06 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:06 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1940242111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:33:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:06.531+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:06 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:06 compute-2 nova_compute[226433]: 2026-01-22 14:33:06.550 226437 DEBUG nova.scheduler.client.report [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:33:06 compute-2 nova_compute[226433]: 2026-01-22 14:33:06.581 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:06 compute-2 nova_compute[226433]: 2026-01-22 14:33:06.624 226437 INFO nova.scheduler.client.report [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Deleted allocations for instance 8e98e700-52a4-44ff-8e11-9404cd11d871
Jan 22 14:33:06 compute-2 nova_compute[226433]: 2026-01-22 14:33:06.705 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:06.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:06.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:07 compute-2 ceph-mon[77081]: pgmap v1951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 627 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 682 B/s wr, 9 op/s
Jan 22 14:33:07 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:07.580+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:07 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:08 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:08 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:08.551+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:08 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:08.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:33:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:08.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:33:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:09 compute-2 ceph-mon[77081]: pgmap v1952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 627 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 682 B/s wr, 9 op/s
Jan 22 14:33:09 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:33:09.438 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:33:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:33:09.439 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:33:09 compute-2 nova_compute[226433]: 2026-01-22 14:33:09.440 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:33:09.440 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:33:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:09.598+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:09 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:10 compute-2 nova_compute[226433]: 2026-01-22 14:33:10.209 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:10 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:10 compute-2 ceph-mon[77081]: pgmap v1953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:10.558+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:10 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:10.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:11 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:11.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:11 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:12 compute-2 podman[251593]: 2026-01-22 14:33:12.072339107 +0000 UTC m=+0.126390900 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 14:33:12 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:12 compute-2 ceph-mon[77081]: pgmap v1954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:12.568+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:12 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:12 compute-2 nova_compute[226433]: 2026-01-22 14:33:12.844 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 22 14:33:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:12.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:13 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:13 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:13.611+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:13 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:14.565+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:14 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:14 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:14 compute-2 ceph-mon[77081]: pgmap v1955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:14.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:15 compute-2 nova_compute[226433]: 2026-01-22 14:33:15.212 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:33:15 compute-2 nova_compute[226433]: 2026-01-22 14:33:15.214 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:33:15 compute-2 nova_compute[226433]: 2026-01-22 14:33:15.214 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:33:15 compute-2 nova_compute[226433]: 2026-01-22 14:33:15.214 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:33:15 compute-2 nova_compute[226433]: 2026-01-22 14:33:15.219 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:15 compute-2 nova_compute[226433]: 2026-01-22 14:33:15.219 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:33:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:15.569+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:15 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:15 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:16.546+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:16 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:16 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 14:33:16 compute-2 ceph-mon[77081]: pgmap v1956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 22 14:33:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:16.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:16.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:17.540+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:17 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:17 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:17 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:18.535+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:18 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:18 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:18 compute-2 ceph-mon[77081]: pgmap v1957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Jan 22 14:33:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1972462447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:33:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1972462447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:33:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:19.523+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:19 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:19 compute-2 nova_compute[226433]: 2026-01-22 14:33:19.754 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092384.7511826, 8e98e700-52a4-44ff-8e11-9404cd11d871 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:33:19 compute-2 nova_compute[226433]: 2026-01-22 14:33:19.754 226437 INFO nova.compute.manager [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] VM Stopped (Lifecycle Event)
Jan 22 14:33:19 compute-2 nova_compute[226433]: 2026-01-22 14:33:19.795 226437 DEBUG nova.compute.manager [None req-70e5a390-06c0-4aeb-b707-d4a109a305fd - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:33:19 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:20 compute-2 nova_compute[226433]: 2026-01-22 14:33:20.220 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:20 compute-2 sudo[251623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:20 compute-2 sudo[251623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:20 compute-2 sudo[251623]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:20 compute-2 sudo[251648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:20 compute-2 sudo[251648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:20 compute-2 sudo[251648]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:20 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:20.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:20 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:20 compute-2 ceph-mon[77081]: pgmap v1958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Jan 22 14:33:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:20.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:21.463+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:21 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:21 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:22.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:22 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:22 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:22 compute-2 ceph-mon[77081]: pgmap v1959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:22.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:23.543+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:23 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:23 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:23 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:23 compute-2 nova_compute[226433]: 2026-01-22 14:33:23.899 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 22 14:33:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:24.517+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:24 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:24 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:24 compute-2 ceph-mon[77081]: pgmap v1960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:24.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:25 compute-2 nova_compute[226433]: 2026-01-22 14:33:25.221 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:25 compute-2 nova_compute[226433]: 2026-01-22 14:33:25.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:25.492+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:25 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:25 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:26.481+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:26 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:26 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:26 compute-2 ceph-mon[77081]: pgmap v1961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:26.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:27.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:27 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:27 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:28.537+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:28 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:28 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:28 compute-2 ceph-mon[77081]: pgmap v1962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:28 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:28.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:29.582+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:29 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:29 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:30 compute-2 nova_compute[226433]: 2026-01-22 14:33:30.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:33:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:30.607+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:30 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:30 compute-2 ceph-mon[77081]: pgmap v1963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:30 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:30 compute-2 podman[251679]: 2026-01-22 14:33:30.999126411 +0000 UTC m=+0.057049199 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:33:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:31.570+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:31 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:31 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:32.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:32 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:32.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:32 compute-2 ceph-mon[77081]: pgmap v1964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:32 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:32 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:33.603+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:33 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:34 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:34.644+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:34 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:34 compute-2 nova_compute[226433]: 2026-01-22 14:33:34.949 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 22 14:33:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:35 compute-2 ceph-mon[77081]: pgmap v1965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:35 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:35 compute-2 nova_compute[226433]: 2026-01-22 14:33:35.224 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:33:35 compute-2 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:35 compute-2 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:33:35 compute-2 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:33:35 compute-2 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:33:35 compute-2 nova_compute[226433]: 2026-01-22 14:33:35.227 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:35.644+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:35 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:36 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:36.693+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:36 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:37 compute-2 ceph-mon[77081]: pgmap v1966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:37 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:37 compute-2 sshd-session[251701]: Invalid user ubuntu from 92.118.39.95 port 34536
Jan 22 14:33:37 compute-2 sshd-session[251701]: Connection closed by invalid user ubuntu 92.118.39.95 port 34536 [preauth]
Jan 22 14:33:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:37.726+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:37 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:38 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:38 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:38.728+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:38 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:39 compute-2 ceph-mon[77081]: pgmap v1967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:39 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:39.696+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:39 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:40 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:40 compute-2 nova_compute[226433]: 2026-01-22 14:33:40.228 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:33:40 compute-2 sudo[251704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:40 compute-2 sudo[251704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:40 compute-2 sudo[251704]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:40 compute-2 sudo[251729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:40 compute-2 sudo[251729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:40 compute-2 sudo[251729]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:40.717+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:40 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:40.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:40.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:40 compute-2 nova_compute[226433]: 2026-01-22 14:33:40.978 226437 INFO nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance failed to shutdown in 60 seconds.
Jan 22 14:33:41 compute-2 ceph-mon[77081]: pgmap v1968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:41 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:41.682+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:41 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:42 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:42.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:42 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:42.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:43 compute-2 podman[251756]: 2026-01-22 14:33:43.058134492 +0000 UTC m=+0.116161619 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 14:33:43 compute-2 ceph-mon[77081]: pgmap v1969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:43 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:43 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:43.743+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:43 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:44 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:44.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:44 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:44.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:45 compute-2 ceph-mon[77081]: pgmap v1970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:45 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:45 compute-2 nova_compute[226433]: 2026-01-22 14:33:45.231 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:33:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:45.769+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:45 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:46 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:46.769+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:46 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:46.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:33:47.208 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:33:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:33:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:47 compute-2 ceph-mon[77081]: pgmap v1971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:47 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:47.725+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:47 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:48 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:48 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:48.710+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:48 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:48.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:49 compute-2 ceph-mon[77081]: pgmap v1972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:49 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:49.739+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:49 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:50 compute-2 nova_compute[226433]: 2026-01-22 14:33:50.233 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:50 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:50.731+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:50 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:50.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:50.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:51 compute-2 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 22 14:33:51 compute-2 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000015.scope: Consumed 16.518s CPU time.
Jan 22 14:33:51 compute-2 systemd-machined[194970]: Machine qemu-5-instance-00000015 terminated.
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.222 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance destroyed successfully.
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.228 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.229 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:33:51 compute-2 ceph-mon[77081]: pgmap v1973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:51 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.368 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.369 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.370 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.655 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.656 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.656 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:33:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:51.778+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:51 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:51 compute-2 nova_compute[226433]: 2026-01-22 14:33:51.935 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:33:52 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:52 compute-2 nova_compute[226433]: 2026-01-22 14:33:52.294 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:33:52 compute-2 nova_compute[226433]: 2026-01-22 14:33:52.318 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:33:52 compute-2 nova_compute[226433]: 2026-01-22 14:33:52.437 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 22 14:33:52 compute-2 nova_compute[226433]: 2026-01-22 14:33:52.439 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 22 14:33:52 compute-2 nova_compute[226433]: 2026-01-22 14:33:52.440 226437 INFO nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating image(s)
Jan 22 14:33:52 compute-2 nova_compute[226433]: 2026-01-22 14:33:52.493 226437 DEBUG nova.storage.rbd_utils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] creating snapshot(nova-resize) on rbd image(33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 22 14:33:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:52.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:52 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:33:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:33:53 compute-2 nova_compute[226433]: 2026-01-22 14:33:53.197 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:53 compute-2 ceph-mon[77081]: pgmap v1974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:33:53 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:53 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:53.766+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:53 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:54 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:54 compute-2 sudo[251823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:54 compute-2 sudo[251823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-2 sudo[251823]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:54 compute-2 sudo[251849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:33:54 compute-2 sudo[251849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-2 sudo[251849]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:54.806+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:54 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:54 compute-2 sudo[251874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:33:54 compute-2 sudo[251874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-2 sudo[251874]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:54 compute-2 sudo[251899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:33:54 compute-2 sudo[251899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:33:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 14:33:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:54.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 14:33:55 compute-2 nova_compute[226433]: 2026-01-22 14:33:55.273 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:33:55 compute-2 nova_compute[226433]: 2026-01-22 14:33:55.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:55 compute-2 nova_compute[226433]: 2026-01-22 14:33:55.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5039 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 22 14:33:55 compute-2 nova_compute[226433]: 2026-01-22 14:33:55.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:33:55 compute-2 nova_compute[226433]: 2026-01-22 14:33:55.275 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 22 14:33:55 compute-2 nova_compute[226433]: 2026-01-22 14:33:55.275 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:33:55 compute-2 ceph-mon[77081]: pgmap v1975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 340 B/s rd, 0 op/s
Jan 22 14:33:55 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:55 compute-2 sudo[251899]: pam_unix(sudo:session): session closed for user root
Jan 22 14:33:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:55.842+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:55 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:56.845+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:56 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:56 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:56 compute-2 ceph-mon[77081]: pgmap v1976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 4 op/s
Jan 22 14:33:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:56.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:57.861+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:57 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:57 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:57 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:33:58 compute-2 nova_compute[226433]: 2026-01-22 14:33:58.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:58.894+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:58 compute-2 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:58 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:58 compute-2 ceph-mon[77081]: pgmap v1977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 600 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 4 op/s
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:33:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:33:58 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 e151: 3 total, 3 up, 3 in
Jan 22 14:33:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:33:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:33:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:59.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:33:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:33:59 compute-2 nova_compute[226433]: 2026-01-22 14:33:59.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:59 compute-2 nova_compute[226433]: 2026-01-22 14:33:59.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:33:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:59.924+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:59 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:33:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:33:59 compute-2 ceph-mon[77081]: osdmap e151: 3 total, 3 up, 3 in
Jan 22 14:33:59 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:00 compute-2 nova_compute[226433]: 2026-01-22 14:34:00.277 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:00 compute-2 nova_compute[226433]: 2026-01-22 14:34:00.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:34:00 compute-2 nova_compute[226433]: 2026-01-22 14:34:00.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 14:34:00 compute-2 nova_compute[226433]: 2026-01-22 14:34:00.548 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:00 compute-2 nova_compute[226433]: 2026-01-22 14:34:00.549 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:00 compute-2 nova_compute[226433]: 2026-01-22 14:34:00.549 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 22 14:34:00 compute-2 sudo[251957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:00 compute-2 sudo[251957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:00 compute-2 sudo[251957]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:00 compute-2 sudo[251983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:00 compute-2 sudo[251983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:00 compute-2 sudo[251983]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:00 compute-2 nova_compute[226433]: 2026-01-22 14:34:00.786 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:34:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:00.923+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:00 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:00.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:00 compute-2 ceph-mon[77081]: pgmap v1979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 608 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 824 KiB/s rd, 819 KiB/s wr, 7 op/s
Jan 22 14:34:00 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:01 compute-2 nova_compute[226433]: 2026-01-22 14:34:01.576 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:34:01 compute-2 nova_compute[226433]: 2026-01-22 14:34:01.599 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:34:01 compute-2 nova_compute[226433]: 2026-01-22 14:34:01.600 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 22 14:34:01 compute-2 nova_compute[226433]: 2026-01-22 14:34:01.601 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:34:01 compute-2 nova_compute[226433]: 2026-01-22 14:34:01.601 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:34:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:01.968+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:01 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:02 compute-2 podman[252008]: 2026-01-22 14:34:02.048760569 +0000 UTC m=+0.093863646 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:34:02 compute-2 nova_compute[226433]: 2026-01-22 14:34:02.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:34:02 compute-2 nova_compute[226433]: 2026-01-22 14:34:02.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 14:34:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:02.925+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:02 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:34:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:02.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:34:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:03 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:03 compute-2 ceph-mon[77081]: pgmap v1980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 33 op/s
Jan 22 14:34:03 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:03 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:03.955+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:03 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:04 compute-2 nova_compute[226433]: 2026-01-22 14:34:04.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:34:04 compute-2 nova_compute[226433]: 2026-01-22 14:34:04.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:04 compute-2 nova_compute[226433]: 2026-01-22 14:34:04.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:04 compute-2 nova_compute[226433]: 2026-01-22 14:34:04.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:04 compute-2 nova_compute[226433]: 2026-01-22 14:34:04.543 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 14:34:04 compute-2 nova_compute[226433]: 2026-01-22 14:34:04.543 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:04.929+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:04 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:04.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:34:04 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2051152007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.010 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.090 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.091 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.094 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.094 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 22 14:34:05 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:05 compute-2 ceph-mon[77081]: pgmap v1981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 32 op/s
Jan 22 14:34:05 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2051152007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:34:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.262 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.263 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4568MB free_disk=20.733367919921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.264 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.264 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.278 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.356 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Applying migration context for instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 as it has an incoming, in-progress migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.357 226437 INFO nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating resource usage from migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1600MB phys_disk=20GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 14:34:05 compute-2 sudo[252053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:05 compute-2 sudo[252053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:05 compute-2 sudo[252053]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:05 compute-2 sudo[252078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:34:05 compute-2 sudo[252078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:05 compute-2 sudo[252078]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:05 compute-2 nova_compute[226433]: 2026-01-22 14:34:05.679 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:05.974+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:05 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:34:06 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2576899971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.074 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.080 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.100 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.126 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.126 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.219 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092431.2180736, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.219 226437 INFO nova.compute.manager [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Stopped (Lifecycle Event)
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.243 226437 DEBUG nova.compute.manager [None req-49dcdea9-b2d6-4f33-b7a2-4960e03f3053 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.247 226437 DEBUG nova.compute.manager [None req-49dcdea9-b2d6-4f33-b7a2-4960e03f3053 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:34:06 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:06 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2576899971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:06 compute-2 nova_compute[226433]: 2026-01-22 14:34:06.279 226437 INFO nova.compute.manager [None req-49dcdea9-b2d6-4f33-b7a2-4960e03f3053 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 22 14:34:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:06.975+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:06 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.056 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.057 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.088 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.175 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.175 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.182 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.182 226437 INFO nova.compute.claims [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:34:07 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:07 compute-2 ceph-mon[77081]: pgmap v1982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.433 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:34:07 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2163323487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.842 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.848 226437 DEBUG nova.compute.provider_tree [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.869 226437 DEBUG nova.scheduler.client.report [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.893 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.894 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:34:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:07.941+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:07 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.943 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.943 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.976 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names
Jan 22 14:34:07 compute-2 nova_compute[226433]: 2026-01-22 14:34:07.993 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.112 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.113 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.113 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Creating image(s)
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.139 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.165 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.194 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.197 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "e47f52dd8ba9b9798349c19f2b626bd4b933ad74" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.197 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "e47f52dd8ba9b9798349c19f2b626bd4b933ad74" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:08 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:08 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:08 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2163323487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:08 compute-2 nova_compute[226433]: 2026-01-22 14:34:08.522 226437 DEBUG nova.policy [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dffdbec5046d4aaf94146923e1681ea1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f3ac78c8a3fa42b39e64829385672445', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 14:34:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:08.906+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:08 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:08.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:09.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:09 compute-2 nova_compute[226433]: 2026-01-22 14:34:09.035 226437 DEBUG nova.virt.libvirt.imagebackend [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Image locations are: [{'url': 'rbd://088fe176-0106-5401-803c-2da38b73b76a/images/a2fdc415-533a-451d-9678-120e6e30afc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://088fe176-0106-5401-803c-2da38b73b76a/images/a2fdc415-533a-451d-9678-120e6e30afc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 22 14:34:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:09 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:09 compute-2 ceph-mon[77081]: pgmap v1983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 27 op/s
Jan 22 14:34:09 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:09.876+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:09 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:09.879 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:34:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:09.880 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:34:09 compute-2 nova_compute[226433]: 2026-01-22 14:34:09.891 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.004 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Successfully created port: e581f563-3369-4b65-92c8-89785e787b51 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.267 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.290 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.355 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.357 226437 DEBUG nova.virt.images [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] a2fdc415-533a-451d-9678-120e6e30afc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.359 226437 DEBUG nova.privsep.utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.359 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.540 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.544 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.595 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.597 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "e47f52dd8ba9b9798349c19f2b626bd4b933ad74" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.626 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:34:10 compute-2 nova_compute[226433]: 2026-01-22 14:34:10.630 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:10 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:10 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:10.832+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:10.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:11.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.040 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Successfully updated port: e581f563-3369-4b65-92c8-89785e787b51 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.050 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.082 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.083 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquired lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.083 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.131 226437 DEBUG nova.compute.manager [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-changed-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.132 226437 DEBUG nova.compute.manager [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing instance network info cache due to event network-changed-e581f563-3369-4b65-92c8-89785e787b51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.132 226437 DEBUG oslo_concurrency.lockutils [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.139 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] resizing rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
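The resize that follows does not shell out; nova's rbd_utils uses the python rbd binding to grow the freshly imported image to the flavor's root_gb (1073741824 bytes = 1 GiB for m1.nano). A minimal sketch of that call path, assuming the pool and client name shown elsewhere in this log; rados.Rados, open_ioctx and rbd.Image.resize are the stock binding APIs, and error handling is elided:

    # Sketch: grow an RBD image to a target size via the python bindings,
    # as nova.storage.rbd_utils.resize does. Connection parameters are
    # assumptions taken from the surrounding log lines.
    import rados
    import rbd

    def resize_disk(image_name, size_bytes, pool="vms",
                    conf="/etc/ceph/ceph.conf", client="client.openstack"):
        cluster = rados.Rados(conffile=conf, name=client)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                with rbd.Image(ioctx, image_name) as image:
                    image.resize(size_bytes)  # 1073741824 == 1 GiB here
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()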
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.239 226437 DEBUG nova.objects.instance [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lazy-loading 'migration_context' on Instance uuid 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.256 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.256 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Ensure instance console log exists: /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.257 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.258 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.258 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:11 compute-2 nova_compute[226433]: 2026-01-22 14:34:11.341 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:34:11 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:11.793+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:11 compute-2 ceph-mon[77081]: pgmap v1984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 22 14:34:11 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.111 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updating instance_info_cache with network_info: [{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.269 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Releasing lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.269 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance network_info: |[{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.270 226437 DEBUG oslo_concurrency.lockutils [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.270 226437 DEBUG nova.network.neutron [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing network info cache for port e581f563-3369-4b65-92c8-89785e787b51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.276 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start _get_guest_xml network_info=[{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T14:33:57Z,direct_url=<?>,disk_format='qcow2',id=a2fdc415-533a-451d-9678-120e6e30afc5,min_disk=0,min_ram=0,name='',owner='fedf0aaa09a64f7ba34cf04c2e4f7c97',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T14:33:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'scsi', 'device_name': '/dev/sda', 'image_id': 'a2fdc415-533a-451d-9678-120e6e30afc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.282 226437 WARNING nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.318 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.319 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.323 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.324 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.326 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.326 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T14:33:57Z,direct_url=<?>,disk_format='qcow2',id=a2fdc415-533a-451d-9678-120e6e30afc5,min_disk=0,min_ram=0,name='',owner='fedf0aaa09a64f7ba34cf04c2e4f7c97',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T14:33:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.327 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.328 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.328 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.329 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.329 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.330 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.330 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.331 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.331 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.332 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
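The nova.virt.hardware lines above are the whole topology search: with no flavor or image constraints the limits default to 65536 per axis, and a 1-vCPU guest admits exactly one (sockets, cores, threads) triple, 1:1:1. A simplified re-statement of that enumeration (our own condensation for illustration, not nova's code):

    # Condensed version of the search the hardware.py log lines describe:
    # keep every (sockets, cores, threads) triple whose product equals the
    # vCPU count and whose axes stay within the per-axis maxima.
    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        bound = lambda cap: range(1, min(vcpus, cap) + 1)
        return [(s, c, t)
                for s, c, t in product(bound(max_sockets), bound(max_cores),
                                       bound(max_threads))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"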
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.337 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:34:12 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3564186839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.803 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
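This `ceph mon dump` is where nova learns the monitor list that reappears about a second later as the three <host> entries of the RBD disks in the generated domain XML. A sketch of extracting those endpoints from the same command; the "mons"/"addr" field names follow the mon dump JSON format but are worth verifying against the deployed release:

    # Sketch: run the `ceph mon dump` logged above and return host:port
    # strings suitable for libvirt <host> elements. Field names are an
    # assumption to verify per Ceph release.
    import json
    import subprocess

    def monitor_addresses(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf])
        # "addr" looks like "192.168.122.100:6789/0"; drop the trailing nonce.
        return [mon["addr"].split("/")[0] for mon in json.loads(out)["mons"]]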
Jan 22 14:34:12 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:12.810+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:12 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:12 compute-2 ceph-mon[77081]: pgmap v1985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 621 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 27 op/s
Jan 22 14:34:12 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3564186839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.828 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:34:12 compute-2 nova_compute[226433]: 2026-01-22 14:34:12.833 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:12 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:12.882 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 14:34:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:12.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 14:34:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:13.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:34:13 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1951221719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.265 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.267 226437 DEBUG nova.virt.libvirt.vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:34:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-673350482',display_name='tempest-AttachSCSIVolumeTestJSON-server-673350482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-673350482',id=22,image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPD0y3mq9CfOHokaR31LEO/NdlTki7hmL1Lmoupuqg1kWxHy0vOWCB8Qr7HBmO03ylnoCixzCBjeQqzIRrpgVE512GDKdI5XzcntJi8Mu2wzHF18nKGhhZcU5kWNmNOuYA==',key_name='tempest-keypair-2020706736',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3ac78c8a3fa42b39e64829385672445',ramdisk_id='',reservation_id='r-9hbea8q0',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-952968705',owner_user_name='tempest-AttachSCSIVolumeTestJSON-952968705-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:34:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dffdbec5046d4aaf94146923e1681ea1',uuid=839e8e64-64a9-4e35-85dd-cdbb7f8e71c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.268 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converting VIF {"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.269 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.270 226437 DEBUG nova.objects.instance [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lazy-loading 'pci_devices' on Instance uuid 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.301 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] End _get_guest_xml xml=<domain type="kvm">
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <uuid>839e8e64-64a9-4e35-85dd-cdbb7f8e71c5</uuid>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <name>instance-00000016</name>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <memory>131072</memory>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <vcpu>1</vcpu>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <metadata>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <nova:name>tempest-AttachSCSIVolumeTestJSON-server-673350482</nova:name>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <nova:creationTime>2026-01-22 14:34:12</nova:creationTime>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <nova:flavor name="m1.nano">
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:memory>128</nova:memory>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:disk>1</nova:disk>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:swap>0</nova:swap>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:ephemeral>0</nova:ephemeral>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:vcpus>1</nova:vcpus>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       </nova:flavor>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <nova:owner>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:user uuid="dffdbec5046d4aaf94146923e1681ea1">tempest-AttachSCSIVolumeTestJSON-952968705-project-member</nova:user>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:project uuid="f3ac78c8a3fa42b39e64829385672445">tempest-AttachSCSIVolumeTestJSON-952968705</nova:project>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       </nova:owner>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <nova:root type="image" uuid="a2fdc415-533a-451d-9678-120e6e30afc5"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <nova:ports>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <nova:port uuid="e581f563-3369-4b65-92c8-89785e787b51">
Jan 22 14:34:13 compute-2 nova_compute[226433]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         </nova:port>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       </nova:ports>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </nova:instance>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   </metadata>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <sysinfo type="smbios">
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <system>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <entry name="manufacturer">RDO</entry>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <entry name="product">OpenStack Compute</entry>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <entry name="serial">839e8e64-64a9-4e35-85dd-cdbb7f8e71c5</entry>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <entry name="uuid">839e8e64-64a9-4e35-85dd-cdbb7f8e71c5</entry>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <entry name="family">Virtual Machine</entry>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </system>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   </sysinfo>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <os>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <boot dev="hd"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <smbios mode="sysinfo"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   </os>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <features>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <acpi/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <apic/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <vmcoreinfo/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   </features>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <clock offset="utc">
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <timer name="pit" tickpolicy="delay"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <timer name="hpet" present="no"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   </clock>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <cpu mode="custom" match="exact">
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <model>Nehalem</model>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <topology sockets="1" cores="1" threads="1"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   </cpu>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   <devices>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <disk type="network" device="disk">
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk">
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       </source>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <target dev="sda" bus="scsi"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <address type="drive" controller="0" unit="0"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <disk type="network" device="cdrom">
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config">
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       </source>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:34:13 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <target dev="sdb" bus="scsi"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <address type="drive" controller="0" unit="1"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="scsi" index="0" model="virtio-scsi"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <interface type="ethernet">
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <mac address="fa:16:3e:35:f2:b5"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <driver name="vhost" rx_queue_size="512"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <mtu size="1442"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <target dev="tape581f563-33"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </interface>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <serial type="pty">
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <log file="/var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/console.log" append="off"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </serial>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <video>
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </video>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <input type="tablet" bus="usb"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <rng model="virtio">
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <backend model="random">/dev/urandom</backend>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </rng>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <controller type="usb" index="0"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     <memballoon model="virtio">
Jan 22 14:34:13 compute-2 nova_compute[226433]:       <stats period="10"/>
Jan 22 14:34:13 compute-2 nova_compute[226433]:     </memballoon>
Jan 22 14:34:13 compute-2 nova_compute[226433]:   </devices>
Jan 22 14:34:13 compute-2 nova_compute[226433]: </domain>
Jan 22 14:34:13 compute-2 nova_compute[226433]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
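The XML block that just ended is what nova hands to libvirt next (the domain appears as instance-00000016 in the <name> element). A minimal libvirt-python sketch of that hand-off; libvirt.open, defineXML and create are the stock API, while nova itself goes through its own host/guest wrapper classes rather than this direct call:

    # Minimal sketch: define and boot a domain from the XML nova rendered
    # above. This is the generic libvirt-python path, not nova's wrapper code.
    import libvirt

    def define_and_boot(domain_xml):
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(domain_xml)  # persistent definition
            dom.create()                      # powers on the guest
            return dom.UUIDString()
        finally:
            conn.close()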
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.303 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Preparing to wait for external event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.303 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.304 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.304 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.305 226437 DEBUG nova.virt.libvirt.vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:34:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-673350482',display_name='tempest-AttachSCSIVolumeTestJSON-server-673350482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-673350482',id=22,image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPD0y3mq9CfOHokaR31LEO/NdlTki7hmL1Lmoupuqg1kWxHy0vOWCB8Qr7HBmO03ylnoCixzCBjeQqzIRrpgVE512GDKdI5XzcntJi8Mu2wzHF18nKGhhZcU5kWNmNOuYA==',key_name='tempest-keypair-2020706736',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3ac78c8a3fa42b39e64829385672445',ramdisk_id='',reservation_id='r-9hbea8q0',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-952968705',owner_user_name='tempest-AttachSCSIVolumeTestJSON-952968705-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:34:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dffdbec5046d4aaf94146923e1681ea1',uuid=839e8e64-64a9-4e35-85dd-cdbb7f8e71c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.306 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converting VIF {"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.306 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.307 226437 DEBUG os_vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.308 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.308 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.309 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.313 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.314 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape581f563-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.314 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape581f563-33, col_values=(('external_ids', {'iface-id': 'e581f563-3369-4b65-92c8-89785e787b51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:f2:b5', 'vm-uuid': '839e8e64-64a9-4e35-85dd-cdbb7f8e71c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:13 compute-2 NetworkManager[49000]: <info>  [1769092453.3173] manager: (tape581f563-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.316 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.319 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.325 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.326 226437 INFO os_vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33')
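The plug sequence above is os-vif driving ovsdbapp: one transaction adds tape581f563-33 to br-int and sets the Neutron/OVN keys in the Interface's external_ids, which is what lets ovn-controller bind the port. A sketch of the same two-command transaction; add_port, db_set and transaction are real ovsdbapp APIs matching the AddPortCommand/DbSetCommand lines, while the ovsdb socket path and timeout are assumptions for this host:

    # Sketch of the logged ovsdbapp transaction: AddPortCommand followed by
    # DbSetCommand on the Interface row. Connection details are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"  # assumed local ovsdb-server socket

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

    external_ids = {
        "iface-id": "e581f563-3369-4b65-92c8-89785e787b51",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:35:f2:b5",
        "vm-uuid": "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tape581f563-33", may_exist=True))
        txn.add(api.db_set("Interface", "tape581f563-33",
                           ("external_ids", external_ids)))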
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.375 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.376 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.376 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] No VIF found with MAC fa:16:3e:35:f2:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.377 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Using config drive
Jan 22 14:34:13 compute-2 nova_compute[226433]: 2026-01-22 14:34:13.400 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:34:13 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:13 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:13 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1951221719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:13.843+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:13 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:14 compute-2 podman[252410]: 2026-01-22 14:34:14.068242112 +0000 UTC m=+0.119230626 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:34:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:14 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:14 compute-2 ceph-mon[77081]: pgmap v1986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 640 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 409 KiB/s wr, 30 op/s
Jan 22 14:34:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:14.886+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:14 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:14.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:15.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.283 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.386 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Creating config drive at /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.395 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp61amt8xc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.529 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp61amt8xc" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.556 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.560 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.759 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.760 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Deleting local config drive /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config because it was imported into RBD.
Jan 22 14:34:15 compute-2 NetworkManager[49000]: <info>  [1769092455.8298] manager: (tape581f563-33): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 22 14:34:15 compute-2 kernel: tape581f563-33: entered promiscuous mode
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.837 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:15 compute-2 ovn_controller[133156]: 2026-01-22T14:34:15Z|00057|binding|INFO|Claiming lport e581f563-3369-4b65-92c8-89785e787b51 for this chassis.
Jan 22 14:34:15 compute-2 ovn_controller[133156]: 2026-01-22T14:34:15Z|00058|binding|INFO|e581f563-3369-4b65-92c8-89785e787b51: Claiming fa:16:3e:35:f2:b5 10.100.0.11
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.850 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:f2:b5 10.100.0.11'], port_security=['fa:16:3e:35:f2:b5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '839e8e64-64a9-4e35-85dd-cdbb7f8e71c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e70febd3-9995-42cd-a322-30bf5db3445d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3ac78c8a3fa42b39e64829385672445', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28729834-6047-40c0-87ed-a5757ce1c57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8526bd5b-b1c9-4a14-b4ce-8f8562154268, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=e581f563-3369-4b65-92c8-89785e787b51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.853 143497 INFO neutron.agent.ovn.metadata.agent [-] Port e581f563-3369-4b65-92c8-89785e787b51 in datapath e70febd3-9995-42cd-a322-30bf5db3445d bound to our chassis
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.856 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e70febd3-9995-42cd-a322-30bf5db3445d
Jan 22 14:34:15 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:15 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:15.858+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:15 compute-2 systemd-udevd[252494]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.870 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0db13fcd-9350-496f-be04-86ddaccdcf45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.871 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape70febd3-91 in ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.874 237689 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape70febd3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.874 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[71db2b83-41f4-4c9a-93ab-70b270062635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.875 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[236964a7-fed6-4172-8e96-0950c34fb08a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 systemd-machined[194970]: New machine qemu-6-instance-00000016.
Jan 22 14:34:15 compute-2 NetworkManager[49000]: <info>  [1769092455.8828] device (tape581f563-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 14:34:15 compute-2 NetworkManager[49000]: <info>  [1769092455.8834] device (tape581f563-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.888 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[130d2c05-01a9-49e3-b8f6-6f68315c8ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 systemd[1]: Started Virtual Machine qemu-6-instance-00000016.
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.914 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7723447c-9103-4169-ade3-72c5877a6e91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.934 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.937 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[38046e87-465f-4e31-bce6-ca4351f74ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 ovn_controller[133156]: 2026-01-22T14:34:15Z|00059|binding|INFO|Setting lport e581f563-3369-4b65-92c8-89785e787b51 ovn-installed in OVS
Jan 22 14:34:15 compute-2 ovn_controller[133156]: 2026-01-22T14:34:15Z|00060|binding|INFO|Setting lport e581f563-3369-4b65-92c8-89785e787b51 up in Southbound
Jan 22 14:34:15 compute-2 nova_compute[226433]: 2026-01-22 14:34:15.941 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.944 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[5f470856-e040-47a2-8cb0-b0af7c7c574f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 NetworkManager[49000]: <info>  [1769092455.9458] manager: (tape70febd3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.975 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb106f1-7089-465d-a4a6-aba7925f6da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:15 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.978 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[8f36051a-11ce-43cb-8682-648ddfd6f9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 NetworkManager[49000]: <info>  [1769092456.0049] device (tape70febd3-90): carrier: link connected
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.010 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3c116a-ef33-46ad-9fdc-0afa18c29b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.030 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[5d16bfe4-fc0a-424a-afc7-84d0c7fca592]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape70febd3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:0c:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630671, 'reachable_time': 20364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252531, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.046 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[24b3259f-8be6-4eaf-91e3-f8e2c9f11cf6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:c26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630671, 'tstamp': 630671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252533, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.061 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd623e4-8c94-499b-991c-7b3683a64dbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape70febd3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:0c:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630671, 'reachable_time': 20364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252534, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.093 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dc77e7-fb2e-4bfb-9f3e-a85126bf3376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.145 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[86da6caf-a609-4433-8ace-f555714bf187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.146 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape70febd3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.147 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.147 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape70febd3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.266 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:16 compute-2 NetworkManager[49000]: <info>  [1769092456.2669] manager: (tape70febd3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 22 14:34:16 compute-2 kernel: tape70febd3-90: entered promiscuous mode
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.272 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.273 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape70febd3-90, col_values=(('external_ids', {'iface-id': '3c983055-ff9e-4976-9d9f-e2b4b8598736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:16 compute-2 ovn_controller[133156]: 2026-01-22T14:34:16Z|00061|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.294 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.299 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.300 143497 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e70febd3-9995-42cd-a322-30bf5db3445d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e70febd3-9995-42cd-a322-30bf5db3445d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.301 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0906251e-fcc0-4a4a-964a-709789f6e945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.302 143497 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: global
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     log         /dev/log local0 debug
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     log-tag     haproxy-metadata-proxy-e70febd3-9995-42cd-a322-30bf5db3445d
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     user        root
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     group       root
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     maxconn     1024
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     pidfile     /var/lib/neutron/external/pids/e70febd3-9995-42cd-a322-30bf5db3445d.pid.haproxy
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     daemon
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: defaults
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     log global
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     mode http
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     option httplog
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     option dontlognull
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     option http-server-close
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     option forwardfor
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     retries                 3
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     timeout http-request    30s
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     timeout connect         30s
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     timeout client          32s
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     timeout server          32s
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     timeout http-keep-alive 30s
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: listen listener
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     bind 169.254.169.254:80
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     server metadata /var/lib/neutron/metadata_proxy
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:     http-request add-header X-OVN-Network-ID e70febd3-9995-42cd-a322-30bf5db3445d
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 22 14:34:16 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.302 143497 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'env', 'PROCESS_TAG=haproxy-e70febd3-9995-42cd-a322-30bf5db3445d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e70febd3-9995-42cd-a322-30bf5db3445d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.443 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092456.4430985, 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.445 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] VM Started (Lifecycle Event)
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.486 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.491 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092456.443212, 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.491 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] VM Paused (Lifecycle Event)
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.522 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.527 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.549 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.563 226437 DEBUG nova.compute.manager [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.563 226437 DEBUG oslo_concurrency.lockutils [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.564 226437 DEBUG oslo_concurrency.lockutils [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.564 226437 DEBUG oslo_concurrency.lockutils [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.564 226437 DEBUG nova.compute.manager [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Processing event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.565 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.570 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092456.5692425, 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.570 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] VM Resumed (Lifecycle Event)
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.571 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.575 226437 INFO nova.virt.libvirt.driver [-] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance spawned successfully.
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.575 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.579 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.579 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.580 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.580 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.589 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.593 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.640 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.661 226437 INFO nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Took 8.55 seconds to spawn the instance on the hypervisor.
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.662 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:34:16 compute-2 podman[252613]: 2026-01-22 14:34:16.670319826 +0000 UTC m=+0.058805116 container create 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 22 14:34:16 compute-2 systemd[1]: Started libpod-conmon-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857.scope.
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.727 226437 INFO nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Took 9.57 seconds to build instance.
Jan 22 14:34:16 compute-2 podman[252613]: 2026-01-22 14:34:16.636628088 +0000 UTC m=+0.025113458 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 14:34:16 compute-2 systemd[1]: Started libcrun container.
Jan 22 14:34:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d345afaa304af39e2e2833fda5b6655c176308d120bb6c3c940577074f3c39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 14:34:16 compute-2 nova_compute[226433]: 2026-01-22 14:34:16.757 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:16 compute-2 podman[252613]: 2026-01-22 14:34:16.770376026 +0000 UTC m=+0.158861316 container init 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 22 14:34:16 compute-2 podman[252613]: 2026-01-22 14:34:16.782612145 +0000 UTC m=+0.171097435 container start 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:34:16 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : New worker (252635) forked
Jan 22 14:34:16 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : Loading success.
Jan 22 14:34:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:16.852+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:16 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:16 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:16 compute-2 ceph-mon[77081]: pgmap v1987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.888458) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456888540, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1353, "num_deletes": 251, "total_data_size": 2459334, "memory_usage": 2501440, "flush_reason": "Manual Compaction"}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456903515, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 1594051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57722, "largest_seqno": 59070, "table_properties": {"data_size": 1588533, "index_size": 2722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14237, "raw_average_key_size": 20, "raw_value_size": 1576433, "raw_average_value_size": 2314, "num_data_blocks": 118, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092378, "oldest_key_time": 1769092378, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 15115 microseconds, and 8111 cpu microseconds.
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.903582) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 1594051 bytes OK
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.903606) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.905150) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.905176) EVENT_LOG_v1 {"time_micros": 1769092456905167, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.905202) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2452768, prev total WAL file size 2452768, number of live WAL files 2.
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.906494) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(1556KB)], [114(8825KB)]
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456906538, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 10631789, "oldest_snapshot_seqno": -1}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 10249 keys, 8931494 bytes, temperature: kUnknown
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456968493, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 8931494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8875681, "index_size": 29077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 277545, "raw_average_key_size": 27, "raw_value_size": 8701348, "raw_average_value_size": 848, "num_data_blocks": 1083, "num_entries": 10249, "num_filter_entries": 10249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.968722) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8931494 bytes
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.970052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 10770, records dropped: 521 output_compression: NoCompression
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.970066) EVENT_LOG_v1 {"time_micros": 1769092456970059, "job": 72, "event": "compaction_finished", "compaction_time_micros": 62020, "compaction_time_cpu_micros": 21711, "output_level": 6, "num_output_files": 1, "total_output_size": 8931494, "num_input_records": 10770, "num_output_records": 10249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456970372, "job": 72, "event": "table_file_deletion", "file_number": 116}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456971597, "job": 72, "event": "table_file_deletion", "file_number": 114}
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.906404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:34:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:16.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:17.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"} v 0) v1
Jan 22 14:34:17 compute-2 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2506543262' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 14:34:17 compute-2 nova_compute[226433]: 2026-01-22 14:34:17.412 226437 DEBUG nova.network.neutron [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updated VIF entry in instance network info cache for port e581f563-3369-4b65-92c8-89785e787b51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 14:34:17 compute-2 nova_compute[226433]: 2026-01-22 14:34:17.414 226437 DEBUG nova.network.neutron [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updating instance_info_cache with network_info: [{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:34:17 compute-2 nova_compute[226433]: 2026-01-22 14:34:17.558 226437 DEBUG oslo_concurrency.lockutils [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:34:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:17.867+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:17 compute-2 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:17 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:17 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2506543262' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 14:34:17 compute-2 ceph-mon[77081]: from='client.? ' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 14:34:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e152 e152: 3 total, 3 up, 3 in
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.318 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.687 226437 DEBUG nova.compute.manager [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.689 226437 DEBUG oslo_concurrency.lockutils [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.689 226437 DEBUG oslo_concurrency.lockutils [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.690 226437 DEBUG oslo_concurrency.lockutils [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.691 226437 DEBUG nova.compute.manager [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] No waiting events found dispatching network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.691 226437 WARNING nova.compute.manager [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received unexpected event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 for instance with vm_state active and task_state None.
Jan 22 14:34:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:18.885+0000 7f47f8ed4640 -1 osd.2 152 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:18 compute-2 ceph-osd[79779]: osd.2 152 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 e153: 3 total, 3 up, 3 in
Jan 22 14:34:18 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:18 compute-2 ceph-mon[77081]: from='client.? ' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]': finished
Jan 22 14:34:18 compute-2 ceph-mon[77081]: osdmap e152: 3 total, 3 up, 3 in
Jan 22 14:34:18 compute-2 ceph-mon[77081]: pgmap v1989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 40 op/s
Jan 22 14:34:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3489887515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:34:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3489887515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:34:18 compute-2 nova_compute[226433]: 2026-01-22 14:34:18.975 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'trusted_certs' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:34:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:18.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.110 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.110 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Ensure instance console log exists: /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.111 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.111 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.112 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.113 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.117 226437 WARNING nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.122 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.123 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.126 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.127 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.127 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='253cca7e-43a2-469f-8e4b-fd8b7bc3551a',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.130 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.130 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'vcpu_model' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.147 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:34:19 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2725787254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.579 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5810] manager: (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/38)
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5820] device (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <warn>  [1769092459.5822] device (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5840] manager: (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/39)
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5849] device (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <warn>  [1769092459.5850] device (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5869] manager: (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5883] manager: (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5893] device (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 14:34:19 compute-2 NetworkManager[49000]: <info>  [1769092459.5901] device (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.608 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.649 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:19.863+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:19 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.887 226437 DEBUG nova.compute.manager [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-changed-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.887 226437 DEBUG nova.compute.manager [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing instance network info cache due to event network-changed-e581f563-3369-4b65-92c8-89785e787b51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.888 226437 DEBUG oslo_concurrency.lockutils [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.888 226437 DEBUG oslo_concurrency.lockutils [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.888 226437 DEBUG nova.network.neutron [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing network info cache for port e581f563-3369-4b65-92c8-89785e787b51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:34:19 compute-2 nova_compute[226433]: 2026-01-22 14:34:19.903 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:19 compute-2 ovn_controller[133156]: 2026-01-22T14:34:19Z|00062|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 14:34:20 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1676911576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:20 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:20 compute-2 ceph-mon[77081]: osdmap e153: 3 total, 3 up, 3 in
Jan 22 14:34:20 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2725787254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.115 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.119 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] End _get_guest_xml xml=<domain type="kvm">
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <uuid>33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</uuid>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <name>instance-00000015</name>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <memory>196608</memory>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <vcpu>1</vcpu>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <metadata>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <nova:name>tempest-MigrationsAdminTest-server-685681022</nova:name>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <nova:creationTime>2026-01-22 14:34:19</nova:creationTime>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <nova:flavor name="m1.micro">
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <nova:memory>192</nova:memory>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <nova:disk>1</nova:disk>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <nova:swap>0</nova:swap>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <nova:ephemeral>0</nova:ephemeral>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <nova:vcpus>1</nova:vcpus>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       </nova:flavor>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <nova:owner>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <nova:user uuid="549def9aedaa41be8d41ae7c6e534303">tempest-MigrationsAdminTest-775661994-project-member</nova:user>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <nova:project uuid="98a3ce5a8a524b0d8327784d9df9a9db">tempest-MigrationsAdminTest-775661994</nova:project>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       </nova:owner>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <nova:ports/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </nova:instance>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   </metadata>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <sysinfo type="smbios">
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <system>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <entry name="manufacturer">RDO</entry>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <entry name="product">OpenStack Compute</entry>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <entry name="serial">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <entry name="uuid">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <entry name="family">Virtual Machine</entry>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </system>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   </sysinfo>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <os>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <boot dev="hd"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <smbios mode="sysinfo"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   </os>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <features>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <acpi/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <apic/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <vmcoreinfo/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   </features>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <clock offset="utc">
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <timer name="pit" tickpolicy="delay"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <timer name="hpet" present="no"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   </clock>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <cpu mode="custom" match="exact">
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <model>Nehalem</model>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <topology sockets="1" cores="1" threads="1"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   </cpu>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   <devices>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <disk type="network" device="disk">
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk">
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       </source>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <target dev="vda" bus="virtio"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <disk type="network" device="cdrom">
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <driver type="raw" cache="none"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config">
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <host name="192.168.122.100" port="6789"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <host name="192.168.122.102" port="6789"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <host name="192.168.122.101" port="6789"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       </source>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <auth username="openstack">
Jan 22 14:34:20 compute-2 nova_compute[226433]:         <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       </auth>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <target dev="sda" bus="sata"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </disk>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <serial type="pty">
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <log file="/var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log" append="off"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </serial>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <video>
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <model type="virtio"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </video>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <input type="tablet" bus="usb"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <rng model="virtio">
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <backend model="random">/dev/urandom</backend>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </rng>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="pci" model="pcie-root-port"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <controller type="usb" index="0"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     <memballoon model="virtio">
Jan 22 14:34:20 compute-2 nova_compute[226433]:       <stats period="10"/>
Jan 22 14:34:20 compute-2 nova_compute[226433]:     </memballoon>
Jan 22 14:34:20 compute-2 nova_compute[226433]:   </devices>
Jan 22 14:34:20 compute-2 nova_compute[226433]: </domain>
Jan 22 14:34:20 compute-2 nova_compute[226433]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.184 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.185 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.185 226437 INFO nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Using config drive
Jan 22 14:34:20 compute-2 systemd-machined[194970]: New machine qemu-7-instance-00000015.
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.285 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:20 compute-2 systemd[1]: Started Virtual Machine qemu-7-instance-00000015.
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.678 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092460.6774466, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.678 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Resumed (Lifecycle Event)
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.680 226437 DEBUG nova.compute.manager [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.684 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance running successfully.
Jan 22 14:34:20 compute-2 virtqemud[225907]: argument unsupported: QEMU guest agent is not configured
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.697 226437 DEBUG nova.virt.libvirt.guest [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.697 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.702 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.705 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.801 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.801 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092460.6804512, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.801 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Started (Lifecycle Event)
Jan 22 14:34:20 compute-2 sudo[252823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:20 compute-2 sudo[252823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:20 compute-2 sudo[252823]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.854 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 22 14:34:20 compute-2 nova_compute[226433]: 2026-01-22 14:34:20.858 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 22 14:34:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:20.876+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:20 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:20 compute-2 sudo[252848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:20 compute-2 sudo[252848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:20 compute-2 sudo[252848]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:20.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:21 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:21 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1676911576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:21 compute-2 ceph-mon[77081]: pgmap v1991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 22 14:34:21 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:21 compute-2 nova_compute[226433]: 2026-01-22 14:34:21.602 226437 DEBUG nova.network.neutron [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updated VIF entry in instance network info cache for port e581f563-3369-4b65-92c8-89785e787b51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 22 14:34:21 compute-2 nova_compute[226433]: 2026-01-22 14:34:21.603 226437 DEBUG nova.network.neutron [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updating instance_info_cache with network_info: [{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:34:21 compute-2 nova_compute[226433]: 2026-01-22 14:34:21.625 226437 DEBUG oslo_concurrency.lockutils [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:34:21 compute-2 nova_compute[226433]: 2026-01-22 14:34:21.715 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:21 compute-2 nova_compute[226433]: 2026-01-22 14:34:21.716 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:21 compute-2 nova_compute[226433]: 2026-01-22 14:34:21.716 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:34:21 compute-2 nova_compute[226433]: 2026-01-22 14:34:21.882 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:34:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:21.907+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:21 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.118 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.122 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.162 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:34:22 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:22 compute-2 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 22 14:34:22 compute-2 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000015.scope: Consumed 1.989s CPU time.
Jan 22 14:34:22 compute-2 systemd-machined[194970]: Machine qemu-7-instance-00000015 terminated.
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.399 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance destroyed successfully.
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.400 226437 DEBUG nova.objects.instance [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'resources' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.418 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.418 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.438 226437 DEBUG nova.objects.instance [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'migration_context' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 22 14:34:22 compute-2 nova_compute[226433]: 2026-01-22 14:34:22.633 226437 DEBUG oslo_concurrency.processutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:22.951+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:22 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:23.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:34:23 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1366398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:23 compute-2 nova_compute[226433]: 2026-01-22 14:34:23.077 226437 DEBUG oslo_concurrency.processutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:23 compute-2 nova_compute[226433]: 2026-01-22 14:34:23.083 226437 DEBUG nova.compute.provider_tree [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:34:23 compute-2 nova_compute[226433]: 2026-01-22 14:34:23.106 226437 DEBUG nova.scheduler.client.report [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:34:23 compute-2 ceph-mon[77081]: pgmap v1992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 22 14:34:23 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:23 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1366398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:23 compute-2 nova_compute[226433]: 2026-01-22 14:34:23.319 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:23 compute-2 nova_compute[226433]: 2026-01-22 14:34:23.497 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:23 compute-2 nova_compute[226433]: 2026-01-22 14:34:23.647 226437 INFO nova.compute.manager [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Swapping old allocation on dict_keys(['d4dcb68c-0009-4467-a6f7-0e9fe0236fbc']) held by migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0 for instance
Jan 22 14:34:23 compute-2 nova_compute[226433]: 2026-01-22 14:34:23.687 226437 DEBUG nova.scheduler.client.report [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Overwriting current allocation {'allocations': {'d4dcb68c-0009-4467-a6f7-0e9fe0236fbc': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 20}}, 'project_id': '98a3ce5a8a524b0d8327784d9df9a9db', 'user_id': '549def9aedaa41be8d41ae7c6e534303', 'consumer_generation': 1} on consumer 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 22 14:34:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:23.927+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:23 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:24 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:24 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.359 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.360 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.360 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.554 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.858 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.875 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.877 226437 DEBUG nova.virt.libvirt.driver [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 22 14:34:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:24.942+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:24 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:24 compute-2 nova_compute[226433]: 2026-01-22 14:34:24.984 226437 DEBUG nova.storage.rbd_utils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rolling back rbd image(33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 22 14:34:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:24.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:25.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:25 compute-2 ceph-mon[77081]: pgmap v1993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 667 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 23 KiB/s wr, 186 op/s
Jan 22 14:34:25 compute-2 nova_compute[226433]: 2026-01-22 14:34:25.288 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:25.893+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:25 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:26 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 14:34:26 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:26.925+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:26 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:27 compute-2 ceph-mon[77081]: pgmap v1994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 706 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 1.9 MiB/s wr, 294 op/s
Jan 22 14:34:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:27.923+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:27 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:28 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.350 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.585 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "f3b9aec5-45fa-4006-a7ca-285acc598bef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.586 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "f3b9aec5-45fa-4006-a7ca-285acc598bef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.602 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.658 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.659 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.666 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.667 226437 INFO nova.compute.claims [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Claim successful on node compute-2.ctlplane.example.com
Jan 22 14:34:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:28.928+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:28 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:28 compute-2 nova_compute[226433]: 2026-01-22 14:34:28.991 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:28.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:29 compute-2 ceph-mon[77081]: pgmap v1995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 706 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.6 MiB/s wr, 242 op/s
Jan 22 14:34:29 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 14:34:29 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3535101709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.416 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.422 226437 DEBUG nova.compute.provider_tree [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.453 226437 DEBUG nova.scheduler.client.report [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.488 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.490 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 22 14:34:29 compute-2 ovn_controller[133156]: 2026-01-22T14:34:29Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:35:f2:b5 10.100.0.11
Jan 22 14:34:29 compute-2 ovn_controller[133156]: 2026-01-22T14:34:29Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:35:f2:b5 10.100.0.11
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.541 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.541 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.573 226437 INFO nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.601 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.658 226437 INFO nova.virt.block_device [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Booting with volume e82f562e-a2cc-4c3f-b1a7-890d6620c280 at /dev/vda
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.776 226437 DEBUG nova.policy [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b8229aedbc64b9691880a91d559e987', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7efa67e548af42419a603e06c3b85f6d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.891 226437 DEBUG os_brick.utils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.894 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.909 248518 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.910 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[2d03872c-c15d-4ed0-9c4d-0e65aff645f7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.912 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.920 248518 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.921 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e49359-af62-4c3f-8e03-a0710e1a2fe2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:5333c49f4ca5', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.923 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.940 248518 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.940 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fbccdc-e88c-42cb-9f38-b1d64f499efa]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.943 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0bef1a-39db-4e3f-b1c6-4a7e788a80e6]: (4, '5492a354-d192-4c48-8602-99be1884b049') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.944 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 14:34:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:29.973+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:29 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.979 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.983 226437 DEBUG os_brick.initiator.connectors.lightos [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.983 226437 DEBUG os_brick.initiator.connectors.lightos [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.984 226437 DEBUG os_brick.initiator.connectors.lightos [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.985 226437 DEBUG os_brick.utils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:5333c49f4ca5', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '5492a354-d192-4c48-8602-99be1884b049', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 22 14:34:29 compute-2 nova_compute[226433]: 2026-01-22 14:34:29.985 226437 DEBUG nova.virt.block_device [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Updating existing volume attachment record: 8698cd44-8fb9-487d-b8fc-95b1321557d8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 22 14:34:30 compute-2 nova_compute[226433]: 2026-01-22 14:34:30.290 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:30 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3535101709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 14:34:30 compute-2 ovn_controller[133156]: 2026-01-22T14:34:30Z|00063|memory|INFO|peak resident set size grew 52% in last 2912.6 seconds, from 16256 kB to 24736 kB
Jan 22 14:34:30 compute-2 ovn_controller[133156]: 2026-01-22T14:34:30Z|00064|memory|INFO|idl-cells-OVN_Southbound:10969 idl-cells-Open_vSwitch:927 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:365 lflow-cache-entries-cache-matches:292 lflow-cache-size-KB:1519 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:641 ofctrl_installed_flow_usage-KB:468 ofctrl_sb_flow_ref_usage-KB:241
Jan 22 14:34:30 compute-2 nova_compute[226433]: 2026-01-22 14:34:30.533 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Successfully created port: bf1e3b76-b4f9-4981-a960-f071d92bc35f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 22 14:34:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:30.973+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:30 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.066 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.069 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.070 226437 INFO nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Creating image(s)
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.071 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.071 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Ensure instance console log exists: /var/lib/nova/instances/f3b9aec5-45fa-4006-a7ca-285acc598bef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.072 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.073 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.073 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.285 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Successfully updated port: bf1e3b76-b4f9-4981-a960-f071d92bc35f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.315 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.316 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquired lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.317 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 22 14:34:31 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:31 compute-2 ceph-mon[77081]: pgmap v1996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 715 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.5 MiB/s wr, 246 op/s
Jan 22 14:34:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2532096136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.424 226437 DEBUG nova.compute.manager [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Received event network-changed-bf1e3b76-b4f9-4981-a960-f071d92bc35f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.424 226437 DEBUG nova.compute.manager [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Refreshing instance network info cache due to event network-changed-bf1e3b76-b4f9-4981-a960-f071d92bc35f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.425 226437 DEBUG oslo_concurrency.lockutils [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 22 14:34:31 compute-2 nova_compute[226433]: 2026-01-22 14:34:31.560 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 22 14:34:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:31.941+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:31 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:32 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:32.922+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:32 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:33 compute-2 podman[252973]: 2026-01-22 14:34:33.050347714 +0000 UTC m=+0.087546216 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:34:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.353 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.403 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Updating instance_info_cache with network_info: [{"id": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "address": "fa:16:3e:8d:4d:dc", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1e3b76-b4", "ovs_interfaceid": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 22 14:34:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:33 compute-2 ceph-mon[77081]: pgmap v1997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 745 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 227 op/s
Jan 22 14:34:33 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:33 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
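[editor's note] The "blocked for 3463 sec" figure advances in lockstep with the wall clock across the later health updates (3468 s at 14:34:37, 3473 s at 14:34:43, 3478 s at 14:34:52, 3488 s at 14:34:58), so the oldest op on osd.2 has been stuck since roughly 13:36:50, nearly an hour before this excerpt. A small helper to back the onset time out of any SLOW_OPS update (function name illustrative):

    from datetime import datetime, timedelta

    def op_onset(update_time, blocked_for_sec):
        # When did the oldest slow op get stuck, given one health update?
        t = datetime.strptime(update_time, "%Y-%m-%d %H:%M:%S")
        return (t - timedelta(seconds=blocked_for_sec)).strftime("%H:%M:%S")

    print(op_onset("2026-01-22 14:34:33", 3463))   # -> 13:36:50
    print(op_onset("2026-01-22 14:34:58", 3488))   # -> 13:36:50, same op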
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.451 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Releasing lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.452 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Instance network_info: |[{"id": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "address": "fa:16:3e:8d:4d:dc", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1e3b76-b4", "ovs_interfaceid": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.453 226437 DEBUG oslo_concurrency.lockutils [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.454 226437 DEBUG nova.network.neutron [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Refreshing network info cache for port bf1e3b76-b4f9-4981-a960-f071d92bc35f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.461 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start _get_guest_xml network_info=[{"id": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "address": "fa:16:3e:8d:4d:dc", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1e3b76-b4", "ovs_interfaceid": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'mount_device': '/dev/vda', 'device_type': 'disk', 'attachment_id': '8698cd44-8fb9-487d-b8fc-95b1321557d8', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e82f562e-a2cc-4c3f-b1a7-890d6620c280', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e82f562e-a2cc-4c3f-b1a7-890d6620c280', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f3b9aec5-45fa-4006-a7ca-285acc598bef', 'attached_at': '', 'detached_at': '', 'volume_id': 'e82f562e-a2cc-4c3f-b1a7-890d6620c280', 'serial': 'e82f562e-a2cc-4c3f-b1a7-890d6620c280'}, 'guest_format': None, 'disk_bus': 'virtio', 'delete_on_termination': True, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
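[editor's note] The block_device_info in the line above says the instance's root disk is the rbd volume volumes/volume-e82f562e-a2cc-4c3f-b1a7-890d6620c280 served by the three monitors at 192.168.122.100-102, i.e. the same cluster currently reporting SLOW_OPS. The dict is a Python repr with embedded objects (ImageMeta, a masked secret_uuid), so it will not parse as JSON or via ast.literal_eval; a regex is the pragmatic way to recover specific fields (sketch; rbd_root_disk is an illustrative name):

    import re

    def rbd_root_disk(line):
        # Pull the Cinder volume ID and Ceph monitor addresses out of a
        # _get_guest_xml DEBUG line like the one above.
        vol = re.search(r"'volume_id': '([0-9a-f-]+)'", line)
        mons = re.search(r"'hosts': \[([^\]]*)\]", line)
        hosts = re.findall(r"'([\d.]+)'", mons.group(1)) if mons else []
        return (vol.group(1) if vol else None), hosts

    # -> ('e82f562e-a2cc-4c3f-b1a7-890d6620c280',
    #     ['192.168.122.100', '192.168.122.102', '192.168.122.101'])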
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.469 226437 WARNING nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.484 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.485 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.489 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.489 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
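[editor's note] The two probes above show the host exposes no cgroups-v1 CPU controller but does have one under cgroups v2, i.e. it runs the unified hierarchy. A rough equivalent of that check (not nova's actual code): on a v2 host the root cgroup advertises its controllers in /sys/fs/cgroup/cgroup.controllers.

    from pathlib import Path

    def has_cgroupv2_cpu(root="/sys/fs/cgroup"):
        # True when the unified hierarchy exposes the "cpu" controller.
        ctl = Path(root, "cgroup.controllers")
        return ctl.exists() and "cpu" in ctl.read_text().split()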
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.491 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.491 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.493 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.493 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.493 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.494 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.494 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 14:34:33 compute-2 nova_compute[226433]: 2026-01-22 14:34:33.494 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
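[editor's note] With no topology constraints from the flavor or image (all limits default to 65536), the 1-vCPU m1.nano flavor admits exactly one (sockets, cores, threads) factorization, hence the single 1:1:1 result above. A simplified sketch of that enumeration (it mirrors the idea, not nova.virt.hardware's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (s, c, t) with s * c * t == vcpus, within the given ceilings.
        out = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    out.append((s, c, t))
        return out

    assert possible_topologies(1) == [(1, 1, 1)]   # matches the log above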
Jan 22 14:34:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:33.952+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:33 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:34 compute-2 ceph-mon[77081]: pgmap v1998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Jan 22 14:34:34 compute-2 sshd-session[253008]: Invalid user ubnt from 45.148.10.121 port 47954
Jan 22 14:34:34 compute-2 sshd-session[253008]: Connection closed by invalid user ubnt 45.148.10.121 port 47954 [preauth]
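[editor's note] "Invalid user ubnt" from 45.148.10.121, and "ubuntu" from 45.148.10.240 at 14:34:59 further down, is typical background SSH credential scanning against default device and cloud usernames; both connections die in preauth. A quick tally per source address (sketch; scan_summary is an illustrative name):

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (\S+) from ([\d.]+) port \d+")

    def scan_summary(lines):
        # Count preauth "Invalid user" attempts per source IP.
        hits = Counter()
        for line in lines:
            m = INVALID.search(line)
            if m:
                hits[m.group(2)] += 1
        return hits.most_common()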
Jan 22 14:34:34 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:34.922+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:35.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:35 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:35 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:35.932+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:35 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:36 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:36 compute-2 ceph-mon[77081]: pgmap v1999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 22 14:34:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:36.905+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:36 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:37.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:37 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:37 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3468 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:37.925+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:37 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:38 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:38 compute-2 ceph-mon[77081]: pgmap v2000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 22 14:34:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:38.899+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:38 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:39.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:39 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:39.895+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:39 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:40 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:40 compute-2 ceph-mon[77081]: pgmap v2001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 22 14:34:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:40.868+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:40 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:41.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:41 compute-2 sudo[253014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:41 compute-2 sudo[253014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:41 compute-2 sudo[253014]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:41 compute-2 sudo[253039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:34:41 compute-2 sudo[253039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:34:41 compute-2 sudo[253039]: pam_unix(sudo:session): session closed for user root
Jan 22 14:34:41 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:41.886+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:41 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:42 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:42 compute-2 ceph-mon[77081]: pgmap v2002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 22 14:34:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:42.924+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:42 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:43.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:43 compute-2 ovsdb-server[47215]: ovs|00005|reconnect|ERR|tcp:127.0.0.1:40134: no response to inactivity probe after 5 seconds, disconnecting
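[editor's note] ovsdb-server dropped a local JSON-RPC client (tcp:127.0.0.1:40134) that failed to answer an inactivity probe within 5 seconds. The log does not say why the client went quiet, though the node is visibly under I/O pressure; such clients normally just reconnect, as the ovsdbapp POLLIN traffic elsewhere in this excerpt suggests. A sketch for flagging these events in an exported journal (regex and function name are illustrative):

    import re

    PROBE = re.compile(r"^(\w+ +\d+ [\d:]+) .*ovsdb-server.*"
                       r"no response to inactivity probe after (\d+) seconds")

    def probe_disconnects(lines):
        # Yield (syslog timestamp, probe seconds) per disconnect.
        for line in lines:
            m = PROBE.search(line)
            if m:
                yield m.group(1), int(m.group(2))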
Jan 22 14:34:43 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:43 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:43.968+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:43 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:44 compute-2 ceph-mon[77081]: pgmap v2003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 88 KiB/s wr, 14 op/s
Jan 22 14:34:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:44.984+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:44 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:45.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:45 compute-2 podman[253066]: 2026-01-22 14:34:45.068893156 +0000 UTC m=+0.113348927 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
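[editor's note] Both EDPM-managed containers seen in this excerpt (ovn_metadata_agent earlier, ovn_controller here) report health_status=healthy with health_failing_streak=0, so the networking agents look fine despite the Ceph noise. A sketch for tallying these podman healthcheck events from an exported journal (regex tuned to the event format shown here):

    import re
    from collections import Counter

    HEALTH = re.compile(r"container health_status \S+ .*?"
                        r"name=([^,]+), health_status=([^,)]+)")

    def health_tally(lines):
        # Count podman healthcheck events per (container, status).
        tally = Counter()
        for line in lines:
            m = HEALTH.search(line)
            if m:
                tally[(m.group(1), m.group(2))] += 1
        return tally

    # For this excerpt: {('ovn_metadata_agent', 'healthy'): 1,
    #                    ('ovn_controller', 'healthy'): 1}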
Jan 22 14:34:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:45 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:46.023+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:46 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:46 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:46 compute-2 ceph-mon[77081]: pgmap v2004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 22 14:34:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:47.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:47.057+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:47 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:34:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:34:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:34:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:34:47 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:48.028+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:48 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:48 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:48 compute-2 ceph-mon[77081]: pgmap v2005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 22 14:34:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:48.990+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:48 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:49.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:49.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:49 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:50.019+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:50 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:50 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:50 compute-2 ceph-mon[77081]: pgmap v2006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 22 14:34:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:34:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:51.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:34:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:51.053+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:51 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:51 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:52.057+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:52 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:52 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:52 compute-2 ceph-mon[77081]: pgmap v2007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 14:34:52 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:53.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:53.037+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:53 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:54.061+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:54 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:54 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:55.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:55.079+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:55 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:55 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:55 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 14:34:55 compute-2 ceph-mon[77081]: pgmap v2008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 22 14:34:56 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:56.104+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:56 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:57.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:57.095+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:57 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:57 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:57 compute-2 ceph-mon[77081]: pgmap v2009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 1023 B/s wr, 0 op/s
Jan 22 14:34:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:58.094+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:58 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:58 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:58 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 3488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:34:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:34:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:59.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:34:59 compute-2 sshd-session[253098]: Invalid user ubuntu from 45.148.10.240 port 47802
Jan 22 14:34:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:59.066+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:59 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:34:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:34:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:34:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:34:59 compute-2 sshd-session[253098]: Connection closed by invalid user ubuntu 45.148.10.240 port 47802 [preauth]
Jan 22 14:34:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:34:59 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:34:59 compute-2 ceph-mon[77081]: pgmap v2010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 1023 B/s wr, 0 op/s
Jan 22 14:35:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:00.046+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:00 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:00 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:00 compute-2 ovn_controller[133156]: 2026-01-22T14:35:00Z|00065|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 22 14:35:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:01.002+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:01 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:01.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:01 compute-2 sudo[253102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:01 compute-2 sudo[253102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:01 compute-2 sudo[253102]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:01 compute-2 sudo[253127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:01 compute-2 sudo[253127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:01 compute-2 sudo[253127]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:01 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:01 compute-2 ceph-mon[77081]: pgmap v2011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 2.8 KiB/s wr, 0 op/s
Jan 22 14:35:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:02.030+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:02 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:02 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:03.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:03.036+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:03 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:03 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:03 compute-2 ceph-mon[77081]: pgmap v2012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 1.9 KiB/s wr, 1 op/s
Jan 22 14:35:03 compute-2 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 3493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:04 compute-2 podman[253153]: 2026-01-22 14:35:04.013615259 +0000 UTC m=+0.065731211 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:35:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:04.061+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:04 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:04 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 14:35:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:05.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:05.088+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:05 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:05.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:05 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:05 compute-2 ceph-mon[77081]: pgmap v2013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 3.2 KiB/s wr, 1 op/s
Jan 22 14:35:05 compute-2 sudo[253175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:05 compute-2 sudo[253175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:05 compute-2 sudo[253175]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:05 compute-2 sudo[253200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:35:05 compute-2 sudo[253200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:05 compute-2 sudo[253200]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:05 compute-2 sudo[253225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:05 compute-2 sudo[253225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:05 compute-2 sudo[253225]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:05 compute-2 sudo[253250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:35:05 compute-2 sudo[253250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:06 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:06.086+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:06 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:06 compute-2 sudo[253250]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:06 compute-2 sudo[253306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:06 compute-2 sudo[253306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:06 compute-2 sudo[253306]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:06 compute-2 sudo[253331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:35:06 compute-2 sudo[253331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:06 compute-2 sudo[253331]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:06 compute-2 sudo[253356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:06 compute-2 sudo[253356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:06 compute-2 sudo[253356]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:06 compute-2 sudo[253382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 14:35:06 compute-2 sudo[253382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:07 compute-2 sudo[253382]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:07.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:07.089+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:07 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:07.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:07 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:07 compute-2 ceph-mon[77081]: pgmap v2014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 14:35:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:08.061+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:08 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:08 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:08 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:09.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:09.062+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:09 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:09 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:09 compute-2 ceph-mon[77081]: pgmap v2015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 14:35:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:10.100+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:10 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:10 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:35:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:35:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:11.117+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:11 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:11 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:11 compute-2 ceph-mon[77081]: pgmap v2016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 4.2 KiB/s wr, 1 op/s
Jan 22 14:35:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:12.100+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:12 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:12 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:13.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:13.114+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:13 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:13 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:13 compute-2 ceph-mon[77081]: pgmap v2017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 22 14:35:13 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:14.118+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:14 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:14 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:14 compute-2 ceph-mon[77081]: pgmap v2018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 22 14:35:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:15.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:15.097+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:15 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:15 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:16.054+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:16 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:16 compute-2 podman[253428]: 2026-01-22 14:35:16.081165205 +0000 UTC m=+0.131345589 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:35:16 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:16 compute-2 ceph-mon[77081]: pgmap v2019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 22 14:35:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:35:16 compute-2 sudo[253454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:16 compute-2 sudo[253454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:16 compute-2 sudo[253454]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:16 compute-2 sudo[253479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:35:16 compute-2 sudo[253479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:16 compute-2 sudo[253479]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:17.034+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:17 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:17.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:17 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:17 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:18.054+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:18 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:18 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:18 compute-2 ceph-mon[77081]: pgmap v2020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:35:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1274904820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:35:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:19.023+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:19 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:35:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:19.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:35:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:19 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:20 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:20.011+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:20 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:20 compute-2 ceph-mon[77081]: pgmap v2021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:21.005+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:21 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:35:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:21.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:35:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:21.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:21 compute-2 sudo[253507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:21 compute-2 sudo[253507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:21 compute-2 sudo[253507]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:22.579+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:22 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:22 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:22 compute-2 sudo[253532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:22 compute-2 sudo[253532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:22 compute-2 sudo[253532]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:23.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:23 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:23 compute-2 ceph-mon[77081]: pgmap v2022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:23 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:23.624+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:23 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:24 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:24 compute-2 ceph-mon[77081]: pgmap v2023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:24.652+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:24 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:25.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:25.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:25 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:25.673+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:25 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:26 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 14:35:26 compute-2 ceph-mon[77081]: pgmap v2024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:26.655+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:26 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:35:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:27.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:35:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:27 compute-2 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:27 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:27.690+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:27 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:28.646+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:28 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:28 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:28 compute-2 ceph-mon[77081]: pgmap v2025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:35:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 60K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1931 writes, 9853 keys, 1931 commit groups, 1.0 writes per commit group, ingest: 16.84 MB, 0.03 MB/s
                                           Interval WAL: 1931 writes, 1931 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     79.2      0.81              0.22        36    0.023       0      0       0.0       0.0
                                             L6      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    139.2    118.1      2.65              0.92        35    0.076    271K    19K       0.0       0.0
                                            Sum      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9    106.5    109.0      3.46              1.13        71    0.049    271K    19K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5    136.5    136.9      0.56              0.27        14    0.040     72K   3610       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    139.2    118.1      2.65              0.92        35    0.076    271K    19K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     79.6      0.81              0.22        35    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.063, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.37 GB write, 0.10 MB/s write, 0.36 GB read, 0.10 MB/s read, 3.5 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 40.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.00025 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2160,39.01 MB,12.8307%) FilterBlock(71,759.30 KB,0.243915%) IndexBlock(71,1.03 MB,0.340045%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:35:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:29.648+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:29 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:29 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:30.667+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:30 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:30 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:30 compute-2 ceph-mon[77081]: pgmap v2026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:31.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:31 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:31.707+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:31 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:32.700+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:32 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:32 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:32 compute-2 ceph-mon[77081]: pgmap v2027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:33.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:33.675+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:33 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:33 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:33 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:34.695+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:34 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:34 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:34 compute-2 ceph-mon[77081]: pgmap v2028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:34 compute-2 podman[253565]: 2026-01-22 14:35:34.98324485 +0000 UTC m=+0.049234529 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:35:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:35.683+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:35 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:35 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:36.633+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:36 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:36 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:36 compute-2 ceph-mon[77081]: pgmap v2029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:37.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:37.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:37.656+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:37 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:37 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:38.625+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:38 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:38 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:38 compute-2 ceph-mon[77081]: pgmap v2030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:39.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:35:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:39.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:35:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:39.579+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:39 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:39 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:39 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:40.533+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:40 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:40 compute-2 ceph-mon[77081]: pgmap v2031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:40 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:41.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:41.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:41.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:41 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:41 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:42.452+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:42 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:42 compute-2 sudo[253588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:42 compute-2 sudo[253588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:42 compute-2 sudo[253588]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:42 compute-2 sudo[253614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:35:42 compute-2 sudo[253614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:35:42 compute-2 sudo[253614]: pam_unix(sudo:session): session closed for user root
Jan 22 14:35:43 compute-2 ceph-mon[77081]: pgmap v2032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:43 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:43 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:43.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:43.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:43 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:44 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:44.400+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:44 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:45 compute-2 ceph-mon[77081]: pgmap v2033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 22 14:35:45 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:35:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:45.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:35:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:45.363+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:45 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:46 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:46.390+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:46 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:47 compute-2 podman[253641]: 2026-01-22 14:35:47.021864052 +0000 UTC m=+0.084083751 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:35:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:47 compute-2 ceph-mon[77081]: pgmap v2034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:47 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:47.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:35:47.210 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:35:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:35:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:35:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:35:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:35:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:47.365+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:47 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:48 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:48 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:48.332+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:48 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:49 compute-2 ceph-mon[77081]: pgmap v2035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:49 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:49.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:49.367+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:49 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:50 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:50.347+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:50 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:51.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:51 compute-2 ceph-mon[77081]: pgmap v2036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:51 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:51.316+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:51 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:52 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:52.288+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:52 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:53.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:53 compute-2 ceph-mon[77081]: pgmap v2037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:53 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:53 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:53.324+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:53 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:54 compute-2 sshd-session[253671]: Invalid user ubuntu from 92.118.39.95 port 41768
Jan 22 14:35:54 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:54.370+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:54 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:54 compute-2 sshd-session[253671]: Connection closed by invalid user ubuntu 92.118.39.95 port 41768 [preauth]
Jan 22 14:35:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:55.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:55 compute-2 ceph-mon[77081]: pgmap v2038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:55 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:55.375+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:55 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:56 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:56.365+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:56 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:57 compute-2 ceph-mon[77081]: pgmap v2039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:57 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:35:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:57.376+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:57 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:35:57 compute-2 sshd-session[253675]: banner exchange: Connection from 3.137.73.221 port 45292: invalid format
Jan 22 14:35:58 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:35:58 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:35:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:58.358+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:58 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:35:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:35:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:35:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:35:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:35:59 compute-2 ceph-mon[77081]: pgmap v2040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:35:59 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:35:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:59.367+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:59 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:35:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:00 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:00.415+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:00 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:01.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:01 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:01 compute-2 ceph-mon[77081]: pgmap v2041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:01 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:02 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:02.390+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:02 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.545676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562545740, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1728, "num_deletes": 255, "total_data_size": 3241730, "memory_usage": 3312272, "flush_reason": "Manual Compaction"}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562563052, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 2109887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59075, "largest_seqno": 60798, "table_properties": {"data_size": 2103097, "index_size": 3605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17067, "raw_average_key_size": 20, "raw_value_size": 2088285, "raw_average_value_size": 2546, "num_data_blocks": 155, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092457, "oldest_key_time": 1769092457, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 17505 microseconds, and 11035 cpu microseconds.
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.563175) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 2109887 bytes OK
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.563208) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.565668) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.565694) EVENT_LOG_v1 {"time_micros": 1769092562565685, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.565721) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3233627, prev total WAL file size 3233627, number of live WAL files 2.
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.568271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(2060KB)], [117(8722KB)]
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562568383, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 11041381, "oldest_snapshot_seqno": -1}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 10538 keys, 10878022 bytes, temperature: kUnknown
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562662817, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 10878022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10818607, "index_size": 31975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26373, "raw_key_size": 285143, "raw_average_key_size": 27, "raw_value_size": 10637558, "raw_average_value_size": 1009, "num_data_blocks": 1202, "num_entries": 10538, "num_filter_entries": 10538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.663967) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 10878022 bytes
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.665746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.9 rd, 114.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.4) write-amplify(5.2) OK, records in: 11069, records dropped: 531 output_compression: NoCompression
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.665785) EVENT_LOG_v1 {"time_micros": 1769092562665768, "job": 74, "event": "compaction_finished", "compaction_time_micros": 95288, "compaction_time_cpu_micros": 60307, "output_level": 6, "num_output_files": 1, "total_output_size": 10878022, "num_input_records": 11069, "num_output_records": 10538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562666731, "job": 74, "event": "table_file_deletion", "file_number": 119}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562669998, "job": 74, "event": "table_file_deletion", "file_number": 117}
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.567949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:02 compute-2 sudo[253679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:02 compute-2 sudo[253679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:02 compute-2 sudo[253679]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:02 compute-2 sudo[253704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:02 compute-2 sudo[253704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:02 compute-2 sudo[253704]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:03.045 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:36:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:03.047 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:36:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:03.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:03.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:03.348+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:03 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:03 compute-2 ceph-mon[77081]: pgmap v2042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:03 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:03 compute-2 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:04.383+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:04 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:04 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:04 compute-2 sshd-session[253729]: banner exchange: Connection from 3.137.73.221 port 46686: invalid format
Jan 22 14:36:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:05.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:05.425+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:05 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:05 compute-2 ceph-mon[77081]: pgmap v2043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:05 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:06 compute-2 podman[253731]: 2026-01-22 14:36:06.010595209 +0000 UTC m=+0.064444405 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 14:36:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:06.438+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:06 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:06 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:06 compute-2 ceph-mon[77081]: pgmap v2044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:07.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:07 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:07 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:07 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 14:36:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:08.417+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:08 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:08 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:08 compute-2 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:08 compute-2 ceph-mon[77081]: pgmap v2045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:36:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:09.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:36:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:09.408+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:09 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:09 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:10.394+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:10 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:10 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:10 compute-2 ceph-mon[77081]: pgmap v2046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:11 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:11.050 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:36:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:11.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:11.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:11.431+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:11 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:11 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:12.403+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:12 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:12 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:12 compute-2 ceph-mon[77081]: pgmap v2047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:13.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:13.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:13.437+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:13 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:13 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:13 compute-2 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:14.415+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:14 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:14 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:14 compute-2 ceph-mon[77081]: pgmap v2048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:14 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:15.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:15.444+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:15 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:15 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:16.395+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:16 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:16 compute-2 ceph-mon[77081]: pgmap v2049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:16 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:16 compute-2 sudo[253757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:16 compute-2 sudo[253757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:16 compute-2 sudo[253757]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:16 compute-2 sudo[253782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:36:16 compute-2 sudo[253782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:16 compute-2 sudo[253782]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:16 compute-2 sudo[253807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:16 compute-2 sudo[253807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:16 compute-2 sudo[253807]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:17 compute-2 sudo[253832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:36:17 compute-2 sudo[253832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:17.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:17.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:17 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:17.348+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:17 compute-2 sudo[253832]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:18 compute-2 podman[253888]: 2026-01-22 14:36:18.046799231 +0000 UTC m=+0.098516568 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 14:36:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:18.392+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:18 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:18 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:18 compute-2 ceph-mon[77081]: pgmap v2050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/472125160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:36:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/472125160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:36:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:36:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:36:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:19.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:19.357+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:19 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:19 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:36:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:36:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:20.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:20 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:20 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:20 compute-2 ceph-mon[77081]: pgmap v2051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:20 compute-2 ovn_controller[133156]: 2026-01-22T14:36:20Z|00066|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:36:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:21.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:21.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:21 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:21 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:22.528+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:22 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:23 compute-2 sudo[253917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:23 compute-2 sudo[253917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:23 compute-2 sudo[253917]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:23 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:23 compute-2 ceph-mon[77081]: pgmap v2052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:23 compute-2 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:23 compute-2 sudo[253942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:23 compute-2 sudo[253942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:23 compute-2 sudo[253942]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:23.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:23.537+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:23 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:24 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:24 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:24.515+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:24 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:25.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:25 compute-2 ceph-mon[77081]: pgmap v2053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:25 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:36:25 compute-2 sudo[253968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:25 compute-2 sudo[253968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:25 compute-2 sudo[253968]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:25 compute-2 sudo[253993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:36:25 compute-2 sudo[253993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:25 compute-2 sudo[253993]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:25.500+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:25 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:26 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:26.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:26 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:36:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:27.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:36:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:27.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:27 compute-2 ceph-mon[77081]: pgmap v2054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:27 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 14:36:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:27.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:27 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:28.529+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:28 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:28 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:28 compute-2 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:29.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:29.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:29.544+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:29 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:30 compute-2 ceph-mon[77081]: pgmap v2055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:30 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:30.564+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:30 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:31 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:31 compute-2 ceph-mon[77081]: pgmap v2056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:31 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:31.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:36:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.5 total, 600.0 interval
                                           Cumulative writes: 9301 writes, 35K keys, 9301 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9301 writes, 2538 syncs, 3.66 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1397 writes, 4636 keys, 1397 commit groups, 1.0 writes per commit group, ingest: 4.06 MB, 0.01 MB/s
                                           Interval WAL: 1397 writes, 614 syncs, 2.28 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:36:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:31.589+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:31 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:32 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:32.544+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:32 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:33.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:33 compute-2 ceph-mon[77081]: pgmap v2057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:33 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:33.520+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:33 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:34 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 3583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:34 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:34.497+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:34 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:35 compute-2 ceph-mon[77081]: pgmap v2058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:35 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:35.538+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:35 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:36 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:36.566+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:36 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:37 compute-2 podman[254025]: 2026-01-22 14:36:37.03804764 +0000 UTC m=+0.084289506 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 14:36:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:37 compute-2 ceph-mon[77081]: pgmap v2059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:37 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:37.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:37.549+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:37 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:38 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:38.540+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:38 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:39 compute-2 ceph-mon[77081]: pgmap v2060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:39 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:39.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:39.561+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:39 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:40 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:40.536+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:40 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:41.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:41 compute-2 ceph-mon[77081]: pgmap v2061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:41 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:41.511+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:41 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:42 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:42.497+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:42 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 14:36:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 14:36:43 compute-2 sudo[254047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:43 compute-2 sudo[254047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:43 compute-2 sudo[254047]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:43.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:43 compute-2 sudo[254072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:36:43 compute-2 sudo[254072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:36:43 compute-2 sudo[254072]: pam_unix(sudo:session): session closed for user root
Jan 22 14:36:43 compute-2 ceph-mon[77081]: pgmap v2062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:43 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 3593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:43 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:43.486+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:43 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:44 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:44.492+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:44 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:36:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:36:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:45.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:45 compute-2 ceph-mon[77081]: pgmap v2063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:45 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:45 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:45.493+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:46 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:46 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:46.459+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:36:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:47.212 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:36:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:47.214 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:36:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:47.214 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:36:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:47.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:47 compute-2 ceph-mon[77081]: pgmap v2064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:47 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:36:48 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 3598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:48 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:48.698 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:36:48 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:48.700 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:36:48 compute-2 ovn_controller[133156]: 2026-01-22T14:36:48Z|00067|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:36:49 compute-2 podman[254100]: 2026-01-22 14:36:49.059080915 +0000 UTC m=+0.113395746 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 14:36:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:49 compute-2 ceph-mon[77081]: pgmap v2065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:51 compute-2 ceph-mon[77081]: pgmap v2066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:51 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:36:51.702 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:36:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:53 compute-2 ceph-mon[77081]: pgmap v2067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:53 compute-2 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:54 compute-2 ceph-mon[77081]: pgmap v2068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:36:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:57.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:36:57 compute-2 ceph-mon[77081]: pgmap v2069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:36:58 compute-2 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.356001) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618357970, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 969, "num_deletes": 251, "total_data_size": 1656180, "memory_usage": 1681392, "flush_reason": "Manual Compaction"}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618368777, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 1077454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60803, "largest_seqno": 61767, "table_properties": {"data_size": 1073224, "index_size": 1818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10654, "raw_average_key_size": 20, "raw_value_size": 1064219, "raw_average_value_size": 2027, "num_data_blocks": 79, "num_entries": 525, "num_filter_entries": 525, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092563, "oldest_key_time": 1769092563, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 12268 microseconds, and 6604 cpu microseconds.
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.368841) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 1077454 bytes OK
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.368871) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.370919) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.370942) EVENT_LOG_v1 {"time_micros": 1769092618370934, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.370970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1651234, prev total WAL file size 1651234, number of live WAL files 2.
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.372121) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(1052KB)], [120(10MB)]
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618372699, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 11955476, "oldest_snapshot_seqno": -1}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 10548 keys, 10380298 bytes, temperature: kUnknown
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618458427, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 10380298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10321110, "index_size": 31684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 286335, "raw_average_key_size": 27, "raw_value_size": 10140182, "raw_average_value_size": 961, "num_data_blocks": 1185, "num_entries": 10548, "num_filter_entries": 10548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.458704) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 10380298 bytes
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.461430) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.3 rd, 121.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(20.7) write-amplify(9.6) OK, records in: 11063, records dropped: 515 output_compression: NoCompression
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.461452) EVENT_LOG_v1 {"time_micros": 1769092618461440, "job": 76, "event": "compaction_finished", "compaction_time_micros": 85814, "compaction_time_cpu_micros": 35004, "output_level": 6, "num_output_files": 1, "total_output_size": 10380298, "num_input_records": 11063, "num_output_records": 10548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618461837, "job": 76, "event": "table_file_deletion", "file_number": 122}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618464449, "job": 76, "event": "table_file_deletion", "file_number": 120}
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.372068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:36:58 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
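Note: the JOB 76 figures above are internally consistent: records dropped (515) is exactly num_input_records (11063) minus num_output_records (10548), and the amplification and throughput numbers follow from the EVENT_LOG_v1 payloads. A minimal Python sanity check, with the byte counts copied from the compaction_started/compaction_finished events above (the 1052KB size of L0 file 122 uses 1024-byte KB):

    # Recompute the figures RocksDB printed for [JOB 76] from the values
    # logged in the EVENT_LOG_v1 events above.
    l0_input_bytes = 1052 * 1024       # L0 file 122, the compaction trigger
    total_input_bytes = 11955476       # "input_data_size" (files 122 + 120)
    output_bytes = 10380298            # "total_output_size" (file 123)
    elapsed_us = 85814                 # "compaction_time_micros"

    write_amplify = output_bytes / l0_input_bytes
    read_write_amplify = (total_input_bytes + output_bytes) / l0_input_bytes
    read_mb_s = total_input_bytes / elapsed_us   # bytes/us == MB/s (decimal)
    write_mb_s = output_bytes / elapsed_us

    print(f"write-amplify      {write_amplify:.1f}")       # 9.6, as logged
    print(f"read-write-amplify {read_write_amplify:.1f}")  # 20.7, as logged
    print(f"MB/sec rd {read_mb_s:.1f}, wr {write_mb_s:.1f}")  # 139.3 rd, 121.0 wr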
Jan 22 14:36:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
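Note: the radosgw beast lines use a common-log-style layout (client address, user, timestamp, request, status, bytes, then latency). A minimal parser sketch; the field mapping is inferred from the samples in this log rather than taken from radosgw documentation, so adjust it if your build logs extra fields:

    import re

    # Parse a radosgw "beast" access line like the one above.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
            '[22/Jan/2026:14:36:59.152 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('addr'), m.group('req'), m.group('status'),
              float(m.group('latency')))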
Jan 22 14:36:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:36:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:36:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:36:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:36:59 compute-2 ceph-mon[77081]: pgmap v2070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:37:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:01.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:37:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:01 compute-2 ceph-mon[77081]: pgmap v2071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:37:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:37:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:37:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:03 compute-2 sudo[254134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:03 compute-2 sudo[254134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:03 compute-2 sudo[254134]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:03 compute-2 sudo[254159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:03 compute-2 sudo[254159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:03 compute-2 sudo[254159]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:03 compute-2 ceph-mon[77081]: pgmap v2072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 0 B/s wr, 105 op/s
Jan 22 14:37:03 compute-2 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3612 sec, osd.2 has slow ops (SLOW_OPS)
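Note: this health update is the key signal in this window: the oldest op on osd.2 has been blocked for 3612 s (about an hour), and over the next half minute the age climbs to 3648 s while the reported count jumps between 0 and 44. A small sketch for tracking that progression from a saved journal export (the file name is hypothetical; the message format is taken from the lines here):

    import re

    # Track SLOW_OPS health-check updates across a journal export.
    SLOW_RE = re.compile(
        r'Health check update: (?P<n>\d+) slow ops, '
        r'oldest one blocked for (?P<age>\d+) sec, (?P<who>\S+) has slow ops'
    )

    with open('compute-2.log') as fh:      # hypothetical export of this log
        for line in fh:
            m = SLOW_RE.search(line)
            if m:
                print(f"{m.group('who')}: {m.group('n')} ops, "
                      f"oldest {int(m.group('age')) / 60:.1f} min")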
Jan 22 14:37:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:04 compute-2 ceph-mon[77081]: pgmap v2073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Jan 22 14:37:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:05.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:37:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:07.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:37:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:07 compute-2 ceph-mon[77081]: pgmap v2074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 14:37:08 compute-2 podman[254186]: 2026-01-22 14:37:08.019191381 +0000 UTC m=+0.072275929 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
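Note: the podman health_status line above embeds the full container configuration as a Python-literal dict in its config_data field. Since it is a literal (single quotes, True/False), a brace-depth scan plus ast.literal_eval recovers it; extract_config_data below is an illustrative helper, not part of podman:

    import ast

    # Cut the balanced-brace config_data={...} span out of a podman
    # health_status journal line and parse it as a Python literal.
    def extract_config_data(line: str) -> dict:
        start = line.index('config_data=') + len('config_data=')
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == '{':
                depth += 1
            elif ch == '}':
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError('unbalanced config_data')

    # cfg = extract_config_data(journal_line)  # journal_line: the text above
    # print(cfg['image'], cfg['healthcheck']['test'])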
Jan 22 14:37:08 compute-2 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:37:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:09.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:37:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:09 compute-2 ceph-mon[77081]: pgmap v2075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 14:37:09 compute-2 sshd-session[254207]: banner exchange: Connection from 3.137.73.221 port 46080: invalid format
Jan 22 14:37:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:11 compute-2 ceph-mon[77081]: pgmap v2076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 22 14:37:12 compute-2 ceph-mon[77081]: pgmap v2077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Jan 22 14:37:12 compute-2 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 14:37:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 14:37:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
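Note: this _set_new_cache_sizes line repeats every ~5 s throughout the window with identical values, which suggests the mon's cache autotuner has reached steady state rather than a duplicated message. Converted to MiB, the raw byte counts are round figures:

    # The _set_new_cache_sizes values above, converted to MiB (1 MiB = 2**20 B).
    MIB = 2 ** 20
    for name, n in [('cache_size', 1020054731),
                    ('inc_alloc', 348127232),
                    ('full_alloc', 348127232),
                    ('kv_alloc', 318767104)]:
        print(f'{name:10s} {n / MIB:8.1f} MiB')
    # cache_size ~972.8 MiB; inc/full_alloc 332.0 MiB; kv_alloc 304.0 MiB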
Jan 22 14:37:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.004000099s ======
Jan 22 14:37:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000099s
Jan 22 14:37:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:15 compute-2 ceph-mon[77081]: pgmap v2078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Jan 22 14:37:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:17 compute-2 ceph-mon[77081]: pgmap v2079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 22 14:37:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:17.412+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:17 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
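Note: every get_health_metrics report in this window names the same oldest op: an omap-get-vals read of rbd_mirror_snapshot_schedule in PG 2.12 from client.14140 at epoch 50 (while most of the 44 slow ops sit in the vms pool). A sketch that pulls those fields out of the osd_op(...) description; the layout is inferred from these samples:

    import re

    # Extract client id, PG, object name, op list, and epoch from an
    # osd_op(...) description like the ones repeated above.
    OSD_OP_RE = re.compile(
        r'osd_op\((?P<client>client\.\d+\.\d+:\d+) (?P<pg>\d+\.[0-9a-f]+) '
        r'\S*:::(?P<obj>[^:]+):head \[(?P<ops>[^\]]+)\].* e(?P<epoch>\d+)\)'
    )

    desc = ('osd_op(client.14140.0:10 2.12 '
            '2:4e99cc3e:::rbd_mirror_snapshot_schedule:head '
            '[omap-get-vals in=16b] snapc 0=[] '
            'ondisk+read+known_if_redirected+supports_pool_eio e50)')
    m = OSD_OP_RE.search(desc)
    print(m.groupdict() if m else 'no match')
    # {'client': 'client.14140.0:10', 'pg': '2.12',
    #  'obj': 'rbd_mirror_snapshot_schedule', 'ops': 'omap-get-vals in=16b',
    #  'epoch': '50'}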
Jan 22 14:37:18 compute-2 sshd-session[254213]: Invalid user ubuntu from 45.148.10.240 port 46584
Jan 22 14:37:18 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:18 compute-2 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:18 compute-2 sshd-session[254213]: Connection closed by invalid user ubuntu 45.148.10.240 port 46584 [preauth]
Jan 22 14:37:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:18.433+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:18 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:37:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:37:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:37:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:37:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:37:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:37:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:19 compute-2 ceph-mon[77081]: pgmap v2080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:19 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:37:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:37:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:19.392+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:19 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:20 compute-2 podman[254216]: 2026-01-22 14:37:20.024104189 +0000 UTC m=+0.084559983 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 14:37:20 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:20.401+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:20 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:37:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:21.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:37:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:21 compute-2 ceph-mon[77081]: pgmap v2081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:21 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:21.402+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:21 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:21 compute-2 ovn_controller[133156]: 2026-01-22T14:37:21Z|00068|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 22 14:37:22 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:22.427+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:22 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:23.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:23 compute-2 ceph-mon[77081]: pgmap v2082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:23 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:23 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:23.469+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:23 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:23 compute-2 sudo[254245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:23 compute-2 sudo[254245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:23 compute-2 sudo[254245]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:23 compute-2 sudo[254270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:23 compute-2 sudo[254270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:23 compute-2 sudo[254270]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:24 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:24.453+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:24 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:25.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:25.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:25.440+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:25 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:25 compute-2 ceph-mon[77081]: pgmap v2083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:25 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:25 compute-2 sudo[254296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:25 compute-2 sudo[254296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:25 compute-2 sudo[254296]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:25 compute-2 sudo[254321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:37:25 compute-2 sudo[254321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:25 compute-2 sudo[254321]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:25 compute-2 sudo[254346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:25 compute-2 sudo[254346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:25 compute-2 sudo[254346]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:25 compute-2 sudo[254371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:37:25 compute-2 sudo[254371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:26 compute-2 sudo[254371]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:26.411+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:26 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:26 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:27.422+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:27 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:27 compute-2 ceph-mon[77081]: pgmap v2084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:27 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:37:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:37:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:37:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:37:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:37:27 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:37:27.752 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:37:27 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:37:27.754 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:37:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:28.442+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:28 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:28 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:28 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:28 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:37:28.757 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:37:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:29.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:29.474+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:29 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:29 compute-2 ceph-mon[77081]: pgmap v2085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:29 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:30.514+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:30 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:30 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:30 compute-2 ceph-mon[77081]: pgmap v2086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:30 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:31.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:37:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:37:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:31.530+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:31 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:31 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:32.520+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:32 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:32 compute-2 ceph-mon[77081]: pgmap v2087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:32 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:32 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:37:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:33.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:37:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:33.561+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:33 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:33 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:34 compute-2 sudo[254434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:34 compute-2 sudo[254434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:34 compute-2 sudo[254434]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:34.576+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:34 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:34 compute-2 sudo[254459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:37:34 compute-2 sudo[254459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:34 compute-2 sudo[254459]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:37:35 compute-2 ceph-mon[77081]: pgmap v2088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:35 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:37:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:35.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:37:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:35.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:35.594+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:35 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:36 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:36.545+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:36 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:37 compute-2 ceph-mon[77081]: pgmap v2089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:37 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:37:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:37.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:37:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:37:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:37.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:37:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:37.553+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:37 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:38 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:38 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:38.563+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:38 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:39 compute-2 podman[254487]: 2026-01-22 14:37:39.009743173 +0000 UTC m=+0.065478604 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:37:39 compute-2 ceph-mon[77081]: pgmap v2090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:39 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:39.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:39.514+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:39 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:40 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:40.534+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:40 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:41 compute-2 ceph-mon[77081]: pgmap v2091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:41 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:41.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:41.525+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:41 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:42 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:42.508+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:42 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:43.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:37:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:37:43 compute-2 ceph-mon[77081]: pgmap v2092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:43 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:43 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:43 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:43.479+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:43 compute-2 sudo[254511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:43 compute-2 sudo[254511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:43 compute-2 sudo[254511]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:43 compute-2 sudo[254536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:37:43 compute-2 sudo[254536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:37:43 compute-2 sudo[254536]: pam_unix(sudo:session): session closed for user root
Jan 22 14:37:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:44 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:44 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:44.450+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:37:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:45.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:37:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:45 compute-2 ceph-mon[77081]: pgmap v2093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:45 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:45.407+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:45 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:46.366+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:46 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:46 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:37:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:37:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:37:47.213 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:37:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:37:47.213 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:37:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:37:47.214 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:37:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:47.392+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:47 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:47 compute-2 ceph-mon[77081]: pgmap v2094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:47 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:48.414+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:48 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:48 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:48 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:48 compute-2 ceph-mon[77081]: pgmap v2095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:48 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:37:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:37:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:49.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:49.432+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:49 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:49 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:50.459+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:50 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:50 compute-2 ceph-mon[77081]: pgmap v2096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:50 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:50 compute-2 sshd-session[254507]: Connection closed by 3.137.73.221 port 37162 [preauth]
Jan 22 14:37:51 compute-2 podman[254565]: 2026-01-22 14:37:51.08340331 +0000 UTC m=+0.132515223 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 14:37:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:51.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:51.470+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:51 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:51 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:52.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:52 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:52 compute-2 ceph-mon[77081]: pgmap v2097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:52 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:52 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:53.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:53.507+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:53 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:53 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:54.466+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:54 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:54 compute-2 ceph-mon[77081]: pgmap v2098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:54 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 22 14:37:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:55.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 22 14:37:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:55.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:55.504+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:55 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:55 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:56.481+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:56 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:56 compute-2 ceph-mon[77081]: pgmap v2099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:56 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:57.449+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:57 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:58 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:58 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:37:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:58.497+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:58 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:59 compute-2 ceph-mon[77081]: pgmap v2100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:37:59 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:37:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:59.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:37:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:37:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:37:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:59.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:37:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:59.470+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:59 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:37:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:00 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:00.470+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:00 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:01 compute-2 ceph-mon[77081]: pgmap v2101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:01 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:01.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:01.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:01.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:01 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:02 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:02.428+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:02 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:03.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:03 compute-2 ceph-mon[77081]: pgmap v2102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:03 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:03 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:03 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:03.410+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:03 compute-2 sudo[254598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:03 compute-2 sudo[254598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:03 compute-2 sudo[254598]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:03 compute-2 sudo[254623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:03 compute-2 sudo[254623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:03 compute-2 sudo[254623]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:04 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:04.365+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:04 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:05.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:05 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:05.405+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:05 compute-2 ceph-mon[77081]: pgmap v2103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:05 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:06 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:06.397+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:06 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:38:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:07.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:38:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:07.416+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:07 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:07 compute-2 ceph-mon[77081]: pgmap v2104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:07 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:08.382+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:08 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:08 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:08 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:08 compute-2 ceph-mon[77081]: pgmap v2105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:38:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:38:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:09.380+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:09 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:09 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:10 compute-2 podman[254651]: 2026-01-22 14:38:10.033220282 +0000 UTC m=+0.080838982 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 14:38:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:10.343+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:10 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:10 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:10 compute-2 ceph-mon[77081]: pgmap v2106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:10 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:10 compute-2 sshd-session[254670]: Invalid user ubuntu from 92.118.39.95 port 48978
Jan 22 14:38:11 compute-2 sshd-session[254670]: Connection closed by invalid user ubuntu 92.118.39.95 port 48978 [preauth]
Jan 22 14:38:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:11.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:11.355+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:11 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:11.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:11 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:12.397+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:12 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:12 compute-2 ceph-mon[77081]: pgmap v2107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:12 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:12 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:13.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:13 compute-2 sshd-session[254674]: banner exchange: Connection from 3.137.73.221 port 37382: invalid format
Jan 22 14:38:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:13.418+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:13 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:13 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:14.442+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:14 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:14 compute-2 ceph-mon[77081]: pgmap v2108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:14 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:15.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:15.398+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:15 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:15 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:16.431+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:16 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:16 compute-2 ceph-mon[77081]: pgmap v2109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:16 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:17.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:17.460+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:17 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:17 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:17 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:38:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:38:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:38:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:38:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:18.474+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:18 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:18 compute-2 ceph-mon[77081]: pgmap v2110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:38:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:38:18 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:38:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:38:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:38:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:19.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:38:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:19.462+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:19 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:19 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:20.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:20 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:20 compute-2 ceph-mon[77081]: pgmap v2111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:20 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:21 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:38:21.327 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:38:21 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:38:21.328 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:38:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:21.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:21.433+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:21 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:21 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:22 compute-2 podman[254679]: 2026-01-22 14:38:22.008254539 +0000 UTC m=+0.073915735 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 14:38:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:22.407+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:22 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:22 compute-2 ceph-mon[77081]: pgmap v2112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:22 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:22 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:23.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:23.411+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:23 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:23 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:24 compute-2 sudo[254707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:24 compute-2 sudo[254707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:24 compute-2 sudo[254707]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:24 compute-2 sudo[254732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:24 compute-2 sudo[254732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:24 compute-2 sudo[254732]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:24.409+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:24 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:24 compute-2 ceph-mon[77081]: pgmap v2113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:24 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:25.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:25.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:25.402+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:25 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:25 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:26.422+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:26 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:26 compute-2 ceph-mon[77081]: pgmap v2114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:26 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:27.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:27.419+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:27 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:27 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:27 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:28 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:38:28.332 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:38:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:28.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:28 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:28 compute-2 ceph-mon[77081]: pgmap v2115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:28 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:29.376+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:29 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:30 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:30.425+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:30 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:38:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:38:31 compute-2 ceph-mon[77081]: pgmap v2116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:31 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:31.452+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:31 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:32.410+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:32 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:32 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:33.446+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:33 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:33 compute-2 ceph-mon[77081]: pgmap v2117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:33 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:33 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:34 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:34.491+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:34 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:34 compute-2 sudo[254763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:34 compute-2 sudo[254763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:34 compute-2 sudo[254763]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:34 compute-2 sudo[254789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:38:34 compute-2 sudo[254789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:34 compute-2 sudo[254789]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:34 compute-2 sudo[254814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:34 compute-2 sudo[254814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:34 compute-2 sudo[254814]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:34 compute-2 sudo[254839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:38:34 compute-2 sudo[254839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:35 compute-2 sudo[254839]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:35 compute-2 sudo[254884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:35 compute-2 sudo[254884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:35 compute-2 sudo[254884]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:35 compute-2 sudo[254909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:38:35 compute-2 sudo[254909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:35 compute-2 sudo[254909]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:35 compute-2 sudo[254934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:35 compute-2 sudo[254934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:35 compute-2 sudo[254934]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:35 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:35.474+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:35 compute-2 ceph-mon[77081]: pgmap v2118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:35 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:38:35 compute-2 sudo[254959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:38:35 compute-2 sudo[254959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:36 compute-2 podman[255058]: 2026-01-22 14:38:36.072602668 +0000 UTC m=+0.098964688 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:38:36 compute-2 podman[255058]: 2026-01-22 14:38:36.204358488 +0000 UTC m=+0.230720578 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 14:38:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:38:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:36.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:38:36 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:36.442+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:36 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:38:37 compute-2 podman[255210]: 2026-01-22 14:38:37.025449537 +0000 UTC m=+0.070930819 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:38:37 compute-2 podman[255210]: 2026-01-22 14:38:37.037739466 +0000 UTC m=+0.083220718 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:38:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:37 compute-2 podman[255273]: 2026-01-22 14:38:37.328599491 +0000 UTC m=+0.069908938 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, description=keepalived for Ceph, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.component=keepalived-container)
Jan 22 14:38:37 compute-2 podman[255273]: 2026-01-22 14:38:37.346164358 +0000 UTC m=+0.087473765 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=keepalived)
Jan 22 14:38:37 compute-2 sudo[254959]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:37 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:37.415+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:37 compute-2 ceph-mon[77081]: pgmap v2119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:37 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:37 compute-2 sudo[255308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:37 compute-2 sudo[255308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:37 compute-2 sudo[255308]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:37 compute-2 sudo[255333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:38:37 compute-2 sudo[255333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:37 compute-2 sudo[255333]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:37 compute-2 sudo[255358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:37 compute-2 sudo[255358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:37 compute-2 sudo[255358]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:37 compute-2 sudo[255383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:38:37 compute-2 sudo[255383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:38 compute-2 sudo[255383]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:38.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:38 compute-2 sudo[255441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:38 compute-2 sudo[255441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:38 compute-2 sudo[255441]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:38 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:38.465+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:38 compute-2 sudo[255466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:38:38 compute-2 sudo[255466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:38 compute-2 sudo[255466]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:38 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:38 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:38 compute-2 sudo[255491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:38 compute-2 sudo[255491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:38 compute-2 sudo[255491]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:38 compute-2 sudo[255516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 14:38:38 compute-2 sudo[255516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:39 compute-2 podman[255583]: 2026-01-22 14:38:39.115362498 +0000 UTC m=+0.060671301 container create fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 14:38:39 compute-2 systemd[1]: Started libpod-conmon-fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b.scope.
Jan 22 14:38:39 compute-2 podman[255583]: 2026-01-22 14:38:39.08509255 +0000 UTC m=+0.030401423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:38:39 compute-2 systemd[1]: Started libcrun container.
Jan 22 14:38:39 compute-2 podman[255583]: 2026-01-22 14:38:39.212905904 +0000 UTC m=+0.158214747 container init fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 22 14:38:39 compute-2 podman[255583]: 2026-01-22 14:38:39.221224864 +0000 UTC m=+0.166534017 container start fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 14:38:39 compute-2 podman[255583]: 2026-01-22 14:38:39.224647046 +0000 UTC m=+0.169955849 container attach fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:38:39 compute-2 strange_galileo[255601]: 167 167
Jan 22 14:38:39 compute-2 systemd[1]: libpod-fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b.scope: Deactivated successfully.
Jan 22 14:38:39 compute-2 podman[255583]: 2026-01-22 14:38:39.229840522 +0000 UTC m=+0.175149345 container died fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 14:38:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:39.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
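The anonymous "HEAD / HTTP/1.0" 200 entries arriving every second or two from 192.168.122.100 and 192.168.122.102 are peer health probes against the RGW beast frontend, not user traffic. A sketch of the same probe; host and port are assumptions, since the listening port is not shown in the log:

    #!/usr/bin/env python3
    # Sketch: issue the same anonymous HEAD probe the beast log records.
    # Host and port (8080) are assumptions; match them to the deployment.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # a healthy RGW answers 200 OK
    conn.close()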
Jan 22 14:38:39 compute-2 systemd[1]: var-lib-containers-storage-overlay-98ff53d928a7b3a205bb53807d47c18e0a7b8b6bd703d386a0db75c97623ca22-merged.mount: Deactivated successfully.
Jan 22 14:38:39 compute-2 podman[255583]: 2026-01-22 14:38:39.292027168 +0000 UTC m=+0.237335991 container remove fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 14:38:39 compute-2 systemd[1]: libpod-conmon-fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b.scope: Deactivated successfully.
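Taken together, the podman lines above trace the full life of the throwaway cephadm helper container strange_galileo: image pull, create, init, start, attach, exit ("died"), and remove, all inside about 200 ms. A sketch for watching that sequence live with podman's event stream; field names follow podman's JSON event format:

    #!/usr/bin/env python3
    # Sketch: stream podman container lifecycle events (create/init/start/
    # attach/died/remove), the same sequence logged above. Runs until Ctrl-C.
    import json, subprocess

    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])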
Jan 22 14:38:39 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:39.475+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:39 compute-2 ceph-mon[77081]: pgmap v2120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:39 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:39 compute-2 podman[255624]: 2026-01-22 14:38:39.554826201 +0000 UTC m=+0.057662501 container create 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 22 14:38:39 compute-2 systemd[1]: Started libpod-conmon-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope.
Jan 22 14:38:39 compute-2 podman[255624]: 2026-01-22 14:38:39.53214235 +0000 UTC m=+0.034978730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 14:38:39 compute-2 systemd[1]: Started libcrun container.
Jan 22 14:38:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 14:38:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 14:38:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 14:38:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 14:38:39 compute-2 podman[255624]: 2026-01-22 14:38:39.672032767 +0000 UTC m=+0.174869077 container init 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 14:38:39 compute-2 podman[255624]: 2026-01-22 14:38:39.680447729 +0000 UTC m=+0.183284029 container start 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 14:38:39 compute-2 podman[255624]: 2026-01-22 14:38:39.684584933 +0000 UTC m=+0.187421243 container attach 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 14:38:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:40.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:40 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:40.482+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:40 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:40 compute-2 ceph-mon[77081]: pgmap v2121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]: [
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:     {
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "available": false,
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "ceph_device": false,
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "lsm_data": {},
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "lvs": [],
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "path": "/dev/sr0",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "rejected_reasons": [
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "Insufficient space (<5GB)",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "Has a FileSystem"
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         ],
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         "sys_api": {
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "actuators": null,
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "device_nodes": "sr0",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "devname": "sr0",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "human_readable_size": "482.00 KB",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "id_bus": "ata",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "model": "QEMU DVD-ROM",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "nr_requests": "2",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "parent": "/dev/sr0",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "partitions": {},
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "path": "/dev/sr0",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "removable": "1",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "rev": "2.5+",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "ro": "0",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "rotational": "1",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "sas_address": "",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "sas_device_handle": "",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "scheduler_mode": "mq-deadline",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "sectors": 0,
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "sectorsize": "2048",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "size": 493568.0,
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "support_discard": "2048",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "type": "disk",
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:             "vendor": "QEMU"
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:         }
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]:     }
Jan 22 14:38:40 compute-2 nifty_hypatia[255641]: ]
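The JSON block above is the result of the ceph-volume inventory run launched at 14:38:38: the only device reported, /dev/sr0 (the QEMU DVD-ROM), is rejected for having under 5 GB and an existing filesystem, so this host offers no media for new OSDs. A small sketch of filtering such a report for usable devices; the input filename is hypothetical:

    #!/usr/bin/env python3
    # Sketch: filter a ceph-volume inventory report for OSD-usable devices.
    # "inventory.json" is a hypothetical capture of the JSON logged above.
    import json

    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print("usable:", dev["path"], dev["sys_api"]["human_readable_size"])
        else:
            print("rejected:", dev["path"], "-", "; ".join(dev["rejected_reasons"]))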
Jan 22 14:38:40 compute-2 podman[255624]: 2026-01-22 14:38:40.918201918 +0000 UTC m=+1.421038268 container died 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 14:38:40 compute-2 systemd[1]: libpod-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope: Deactivated successfully.
Jan 22 14:38:40 compute-2 systemd[1]: libpod-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope: Consumed 1.262s CPU time.
Jan 22 14:38:40 compute-2 systemd[1]: var-lib-containers-storage-overlay-264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d-merged.mount: Deactivated successfully.
Jan 22 14:38:40 compute-2 podman[255624]: 2026-01-22 14:38:40.989990702 +0000 UTC m=+1.492827002 container remove 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 14:38:41 compute-2 systemd[1]: libpod-conmon-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope: Deactivated successfully.
Jan 22 14:38:41 compute-2 sudo[255516]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:41 compute-2 podman[256853]: 2026-01-22 14:38:41.054246679 +0000 UTC m=+0.108390752 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 22 14:38:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:41 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:41.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:41 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:38:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
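The audit lines above show the active mgr (mgr.compute-0.nyayzk on 192.168.122.100) dispatching the mon commands cephadm needs before placing OSDs: a minimal client conf, the client.admin and client.bootstrap-osd keys, and a scan for destroyed entries in the OSD tree. A sketch of issuing the same queries by hand; it assumes a working client.admin keyring on the node, and the CLI spellings are my mapping of the audited JSON command prefixes:

    # Sketch: the mon commands dispatched above, issued manually.
    # Assumes admin credentials; CLI forms mirror the audited prefixes.
    import subprocess

    for cmd in (["ceph", "config", "generate-minimal-conf"],
                ["ceph", "auth", "get", "client.bootstrap-osd"],
                ["ceph", "osd", "tree", "destroyed", "--format", "json"]):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)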
Jan 22 14:38:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:42 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:42.507+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:42 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:42 compute-2 ceph-mon[77081]: pgmap v2122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:43.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:43 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:43.519+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:43 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:43 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:44 compute-2 sudo[256887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:44 compute-2 sudo[256887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:44 compute-2 sudo[256887]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:44.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:44 compute-2 sudo[256912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:44 compute-2 sudo[256912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:44 compute-2 sudo[256912]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:44 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:44.547+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:44 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:44 compute-2 ceph-mon[77081]: pgmap v2123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:45.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:45 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:45.524+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:45 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:46 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:46.523+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:46 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:46 compute-2 ceph-mon[77081]: pgmap v2124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:38:47.215 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:38:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:38:47.216 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:38:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:38:47.216 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
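The three DEBUG lines above are oslo.concurrency's standard trace for one lock cycle: neutron's ProcessMonitor acquires "_check_child_processes" after a 1 ms wait, holds it for under a millisecond, and releases it. The idiom behind that trace is the lockutils.synchronized decorator; a minimal sketch, assuming oslo.concurrency is installed:

    #!/usr/bin/env python3
    # Sketch: the oslo.concurrency idiom that emits the Acquiring/
    # acquired/released DEBUG triple seen above (at DEBUG log level).
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Critical section: one thread at a time may scan child processes.
        pass

    check_child_processes()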
Jan 22 14:38:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:47 compute-2 sudo[256939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:38:47 compute-2 sudo[256939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:47 compute-2 sudo[256939]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:47 compute-2 sudo[256964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:38:47 compute-2 sudo[256964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:38:47 compute-2 sudo[256964]: pam_unix(sudo:session): session closed for user root
Jan 22 14:38:47 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:47.550+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:47 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:38:47 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:47 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:48 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:48.543+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:48 compute-2 ceph-mon[77081]: pgmap v2125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:48 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:49 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:49.562+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:49 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:50.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:50 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:50.548+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:50 compute-2 ceph-mon[77081]: pgmap v2126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:50 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:51.525+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:51 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:51 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:52.555+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:52 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:52 compute-2 ceph-mon[77081]: pgmap v2127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:52 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:52 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:53 compute-2 ovn_controller[133156]: 2026-01-22T14:38:53Z|00069|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:38:53 compute-2 podman[256992]: 2026-01-22 14:38:53.141812427 +0000 UTC m=+0.191688651 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
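The health_status=healthy events for ovn_metadata_agent (14:38:41) and ovn_controller (here) come from podman's periodic healthcheck, which per the embedded config_data runs the /openstack/healthcheck script mounted read-only into each container. The same check can be run on demand; a sketch:

    # Sketch: trigger the containers' healthchecks on demand, mirroring
    # the periodic health_status events in the log (exit 0 means healthy).
    import subprocess

    for name in ("ovn_controller", "ovn_metadata_agent"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")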
Jan 22 14:38:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:53.582+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:53 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:53 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:54.616+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:54 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:54 compute-2 ceph-mon[77081]: pgmap v2128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:54 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:54 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 14:38:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:55.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:55.572+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:55 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:55 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:56.585+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:56 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:56 compute-2 ceph-mon[77081]: pgmap v2129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:56 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:38:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:57.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:38:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:57.587+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:57 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:57 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:57 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:38:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:58.599+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:58 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:59 compute-2 ceph-mon[77081]: pgmap v2130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:38:59 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:38:59 compute-2 ovn_controller[133156]: 2026-01-22T14:38:59Z|00070|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:38:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:38:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:38:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:38:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:59.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:38:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:59.593+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:59 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:38:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:00 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:00.576+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:00 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:01 compute-2 ceph-mon[77081]: pgmap v2131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:01 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:01.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:01.550+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:01 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:02 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:02.563+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:02 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:03 compute-2 ceph-mon[77081]: pgmap v2132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:03 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:03 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:03.521+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:03 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:04 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:04 compute-2 sudo[257024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:04 compute-2 sudo[257024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:04 compute-2 sudo[257024]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:04 compute-2 sudo[257049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:04 compute-2 sudo[257049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:04 compute-2 sudo[257049]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:04.567+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:04 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:05 compute-2 ceph-mon[77081]: pgmap v2133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:05 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:05.580+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:05 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:06 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:06.619+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:06 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:07 compute-2 ceph-mon[77081]: pgmap v2134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:07 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:07.659+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:07 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:08 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:08 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:08.624+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:08 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:09 compute-2 ceph-mon[77081]: pgmap v2135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:09 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:09.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:09 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:09.614+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:10 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:10.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:10 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:10.651+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:11 compute-2 ceph-mon[77081]: pgmap v2136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:11 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:11.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:11 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:11.613+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:11 compute-2 podman[257078]: 2026-01-22 14:39:11.988299984 +0000 UTC m=+0.046532767 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:39:12 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:12.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:12 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:12.609+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:13 compute-2 ceph-mon[77081]: pgmap v2137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:13 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:13 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:13 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:13.595+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:14 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:14 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:14.593+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:15 compute-2 ceph-mon[77081]: pgmap v2138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:15 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:15 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:15.545+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:16 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:16 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:16.530+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:17 compute-2 ceph-mon[77081]: pgmap v2139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:17 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:17 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:17.554+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:18 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:18 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:39:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/11134575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:39:18 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:18.589+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:19 compute-2 ceph-mon[77081]: pgmap v2140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:19 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:19 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:19.574+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:20 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:20 compute-2 ceph-mon[77081]: pgmap v2141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:20 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:20.551+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:21.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:22 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:22.483+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:22.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:22 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:23 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:39:23.127 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:39:23 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:39:23.128 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:39:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:23.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:23 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:23.477+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:23 compute-2 ceph-mon[77081]: pgmap v2142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:23 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:23 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:24 compute-2 podman[257103]: 2026-01-22 14:39:24.051532859 +0000 UTC m=+0.110762913 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:39:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:24 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:24.480+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:24.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:24 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:24 compute-2 sudo[257129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:24 compute-2 sudo[257129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:24 compute-2 sudo[257129]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:24 compute-2 sudo[257154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:24 compute-2 sudo[257154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:24 compute-2 sudo[257154]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:25.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:25 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:25.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:25 compute-2 ceph-mon[77081]: pgmap v2143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:25 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:26 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:26.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:26 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:26 compute-2 ceph-mon[77081]: pgmap v2144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:27.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:27 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:27.433+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:27 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:28 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:28.481+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:28 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:28 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:28 compute-2 ceph-mon[77081]: pgmap v2145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:29.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:29 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:29.529+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:29 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:30 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:30.532+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:30 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:30 compute-2 ceph-mon[77081]: pgmap v2146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:30 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:31 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:39:31.130 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:39:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:31.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:31.508+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 43 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:31 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 43 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 43 slow requests (by type [ 'delayed' : 43 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:39:31 compute-2 ceph-mon[77081]: 43 slow requests (by type [ 'delayed' : 43 ] most affected pool [ 'vms' : 35 ])
Jan 22 14:39:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:32.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:32.508+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:32 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:32 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/155501559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:39:32 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/155501559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:39:32 compute-2 ceph-mon[77081]: pgmap v2147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 746 MiB data, 621 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:32 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:32 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:33.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:33 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:33 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 14:39:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 14:39:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:34.543+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:34 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:35 compute-2 ceph-mon[77081]: pgmap v2148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 739 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 22 14:39:35 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:35.570+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:35 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:36 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:36.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:36.577+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:36 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:37 compute-2 ceph-mon[77081]: pgmap v2149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:37 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:37.622+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:37 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:38 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:38 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:38.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:38.623+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:38 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:38 compute-2 sshd-session[257188]: Invalid user ubuntu from 45.148.10.240 port 34294
Jan 22 14:39:38 compute-2 sshd-session[257188]: Connection closed by invalid user ubuntu 45.148.10.240 port 34294 [preauth]
Jan 22 14:39:39 compute-2 ceph-mon[77081]: pgmap v2150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:39 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:39.607+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:39 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:40 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:40.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:40.628+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:40 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:41 compute-2 ceph-mon[77081]: pgmap v2151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:41 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:41.666+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:41 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:42 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:42.674+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:42 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:43 compute-2 podman[257193]: 2026-01-22 14:39:43.040498323 +0000 UTC m=+0.088253810 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:39:43 compute-2 ovn_controller[133156]: 2026-01-22T14:39:43Z|00071|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:39:43 compute-2 ceph-mon[77081]: pgmap v2152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:43 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:43 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:43 compute-2 ovn_controller[133156]: 2026-01-22T14:39:43Z|00072|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:39:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:43.647+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:43 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:44 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:44.682+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:44 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:44 compute-2 sudo[257213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:44 compute-2 sudo[257213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:44 compute-2 sudo[257213]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:44 compute-2 sudo[257238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:44 compute-2 sudo[257238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:44 compute-2 sudo[257238]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:45 compute-2 ceph-mon[77081]: pgmap v2153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.272527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785272582, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2432, "num_deletes": 251, "total_data_size": 4877991, "memory_usage": 4950528, "flush_reason": "Manual Compaction"}
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785299699, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 3171350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61772, "largest_seqno": 64199, "table_properties": {"data_size": 3162263, "index_size": 5325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22746, "raw_average_key_size": 21, "raw_value_size": 3142323, "raw_average_value_size": 2939, "num_data_blocks": 228, "num_entries": 1069, "num_filter_entries": 1069, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092619, "oldest_key_time": 1769092619, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 27396 microseconds, and 15068 cpu microseconds.
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.299918) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 3171350 bytes OK
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.300014) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301916) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301942) EVENT_LOG_v1 {"time_micros": 1769092785301933, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 4867040, prev total WAL file size 4867040, number of live WAL files 2.
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.304962) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(3097KB)], [123(10137KB)]
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785305056, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 13551648, "oldest_snapshot_seqno": -1}
Jan 22 14:39:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 11098 keys, 11910942 bytes, temperature: kUnknown
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785400548, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11910942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11847322, "index_size": 34772, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 299403, "raw_average_key_size": 26, "raw_value_size": 11655659, "raw_average_value_size": 1050, "num_data_blocks": 1311, "num_entries": 11098, "num_filter_entries": 11098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.400871) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11910942 bytes
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.402620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.8 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11617, records dropped: 519 output_compression: NoCompression
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.402637) EVENT_LOG_v1 {"time_micros": 1769092785402629, "job": 78, "event": "compaction_finished", "compaction_time_micros": 95575, "compaction_time_cpu_micros": 52514, "output_level": 6, "num_output_files": 1, "total_output_size": 11910942, "num_input_records": 11617, "num_output_records": 11098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785403164, "job": 78, "event": "table_file_deletion", "file_number": 125}
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785404945, "job": 78, "event": "table_file_deletion", "file_number": 123}
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.304851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:39:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:45.660+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:45 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:46 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:46 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:46.698+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:46 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:39:47.216 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:39:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:39:47.217 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:39:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:39:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:39:47 compute-2 ceph-mon[77081]: pgmap v2154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 22 14:39:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:47 compute-2 sudo[257264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:47 compute-2 sudo[257264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:47 compute-2 sudo[257264]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:47 compute-2 sudo[257289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:39:47 compute-2 sudo[257289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:47 compute-2 sudo[257289]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:47.663+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:47 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:47 compute-2 sudo[257314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:47 compute-2 sudo[257314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:47 compute-2 sudo[257314]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:47 compute-2 sudo[257339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:39:47 compute-2 sudo[257339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:48 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:48 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:48 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:48 compute-2 sudo[257339]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:48.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:48.665+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:48 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:49.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:49 compute-2 ceph-mon[77081]: pgmap v2155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:39:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:39:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:39:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:39:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:39:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:39:49 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:49.688+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:49 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:50.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:50.659+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:50 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:51 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:51 compute-2 ceph-mon[77081]: pgmap v2156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:51 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:51 compute-2 ovn_controller[133156]: 2026-01-22T14:39:51Z|00073|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 14:39:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:51.651+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:51 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:52 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:52.671+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:52 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:53 compute-2 ceph-mon[77081]: pgmap v2157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:53 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:53 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:53.660+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:53 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:54 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:54.698+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:54 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:55 compute-2 podman[257400]: 2026-01-22 14:39:55.080646218 +0000 UTC m=+0.133320654 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:39:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:39:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:39:55 compute-2 ceph-mon[77081]: pgmap v2158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:39:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:39:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:55.654+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:55 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:55 compute-2 sudo[257426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:39:55 compute-2 sudo[257426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:55 compute-2 sudo[257426]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:55 compute-2 sudo[257451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:39:55 compute-2 sudo[257451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:39:55 compute-2 sudo[257451]: pam_unix(sudo:session): session closed for user root
Jan 22 14:39:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:56 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:56 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:56.647+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:56 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:57.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:57 compute-2 ceph-mon[77081]: pgmap v2159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:57 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:57.670+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:57 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:58.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:58 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:39:58 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:58 compute-2 ceph-mon[77081]: pgmap v2160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:39:58 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:58.715+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:39:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:39:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:39:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:59.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:39:59 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:39:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:59.724+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:39:59 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:00 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:00.679+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:00 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 14:40:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 14:40:00 compute-2 ceph-mon[77081]: pgmap v2161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:00 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
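[annotation] Here the monitor escalates the per-OSD reports into a cluster health check: HEALTH_WARN with SLOW_OPS, oldest op blocked 3788 s, osd.2 named as the culprit. The same state can be read programmatically instead of scraping the log; a sketch assuming the ceph CLI and client credentials are available on this host, and the JSON layout ("checks" keyed by name, each with severity and summary.message) matches current Ceph releases:

    import json
    import subprocess

    # Ask the cluster for structured health state (requires ceph CLI + keyring).
    out = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    for name, check in health.get("checks", {}).items():
        print(name, check["severity"], check["summary"]["message"])
    # Expected here: SLOW_OPS HEALTH_WARN "44 slow ops, ... osd.2 has slow ops"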
Jan 22 14:40:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:01.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:01 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:01.666+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:01 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:02 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:02.694+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:02 compute-2 ceph-mon[77081]: pgmap v2162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:02 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.328 143497 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d6334cad-de94-4b67-9127-1d06fa285533 with type ""
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.330 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:f2:b5 10.100.0.11'], port_security=['fa:16:3e:35:f2:b5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '839e8e64-64a9-4e35-85dd-cdbb7f8e71c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e70febd3-9995-42cd-a322-30bf5db3445d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3ac78c8a3fa42b39e64829385672445', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28729834-6047-40c0-87ed-a5757ce1c57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8526bd5b-b1c9-4a14-b4ce-8f8562154268, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=e581f563-3369-4b65-92c8-89785e787b51) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.331 143497 INFO neutron.agent.ovn.metadata.agent [-] Port e581f563-3369-4b65-92c8-89785e787b51 in datapath e70febd3-9995-42cd-a322-30bf5db3445d unbound from our chassis
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.333 143497 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e70febd3-9995-42cd-a322-30bf5db3445d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.335 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc4c367-a858-42f9-ac4c-fa4fb14c83a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.335 143497 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d namespace which is not needed anymore
Jan 22 14:40:03 compute-2 ovn_controller[133156]: 2026-01-22T14:40:03Z|00074|binding|INFO|Removing iface tape581f563-33 ovn-installed in OVS
Jan 22 14:40:03 compute-2 ovn_controller[133156]: 2026-01-22T14:40:03Z|00075|binding|INFO|Removing lport e581f563-3369-4b65-92c8-89785e787b51 ovn-installed in OVS
Jan 22 14:40:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:03.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:03 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : haproxy version is 2.8.14-c23fe91
Jan 22 14:40:03 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : path to executable is /usr/sbin/haproxy
Jan 22 14:40:03 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [WARNING]  (252633) : Exiting Master process...
Jan 22 14:40:03 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [WARNING]  (252633) : Exiting Master process...
Jan 22 14:40:03 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [ALERT]    (252633) : Current worker (252635) exited with code 143 (Terminated)
Jan 22 14:40:03 compute-2 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [WARNING]  (252633) : All workers exited. Exiting... (0)
Jan 22 14:40:03 compute-2 systemd[1]: libpod-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857.scope: Deactivated successfully.
Jan 22 14:40:03 compute-2 podman[257497]: 2026-01-22 14:40:03.506984423 +0000 UTC m=+0.052948916 container died 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 14:40:03 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857-userdata-shm.mount: Deactivated successfully.
Jan 22 14:40:03 compute-2 systemd[1]: var-lib-containers-storage-overlay-32d345afaa304af39e2e2833fda5b6655c176308d120bb6c3c940577074f3c39-merged.mount: Deactivated successfully.
Jan 22 14:40:03 compute-2 podman[257497]: 2026-01-22 14:40:03.561837007 +0000 UTC m=+0.107801470 container cleanup 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 14:40:03 compute-2 systemd[1]: libpod-conmon-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857.scope: Deactivated successfully.
Jan 22 14:40:03 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:03.661+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:03 compute-2 podman[257524]: 2026-01-22 14:40:03.665447598 +0000 UTC m=+0.065696101 container remove 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.674 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5ec947-4867-495b-a631-e549ef402454]: (4, ('Thu Jan 22 02:40:03 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d (43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857)\n43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857\nThu Jan 22 02:40:03 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d (43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857)\n43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.677 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[415c0713-75c7-4483-a6d9-e3263edbe761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.678 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape70febd3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:40:03 compute-2 kernel: tape70febd3-90: left promiscuous mode
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.719 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[85adeb91-cd02-40be-852c-2f7c61a94b02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.741 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[38e44909-5be1-413a-a450-66d6e3c906ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.744 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[23e2a22d-a72a-418b-b0e9-a4af31947d25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.773 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0710fede-4ae0-49d4-b30a-4c9d6d755edc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630663, 'reachable_time': 36962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257541, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 22 14:40:03 compute-2 systemd[1]: run-netns-ovnmeta\x2de70febd3\x2d9995\x2d42cd\x2da322\x2d30bf5db3445d.mount: Deactivated successfully.
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.779 143856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 22 14:40:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.780 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[faa1d9c4-8fba-4ba7-8498-e29dd3cf8f67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
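[annotation] The sequence above is a clean metadata teardown: the last VIF on network e70febd3-... unbinds from this chassis, the per-network haproxy container is stopped and removed, the tap port is deleted from OVS, and finally the ovnmeta-<network> namespace itself is removed (neutron.privileged...ip_lib.remove_netns). Under the hood that privileged call is a pyroute2 namespace removal; a minimal standalone sketch, assuming pyroute2 is installed and the caller has root:

    import errno

    from pyroute2 import netns  # privileged: manipulates /run/netns

    def remove_namespace(name: str) -> None:
        """Delete a network namespace, tolerating the already-gone case."""
        try:
            netns.remove(name)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise

    remove_namespace("ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d")

This is a sketch of the end effect, not Neutron's actual code path, which routes through oslo.privsep as the reply[...] lines show.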
Jan 22 14:40:03 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:03 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:04.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:04 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:04.613+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:04 compute-2 ceph-mon[77081]: pgmap v2163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:04 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:05 compute-2 sudo[257547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:05 compute-2 sudo[257547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:05 compute-2 sudo[257547]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:05 compute-2 sudo[257572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:05 compute-2 sudo[257572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:05 compute-2 sudo[257572]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:05 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:05.586+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:05 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:06.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:06 compute-2 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:06.615+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:07 compute-2 ceph-mon[77081]: pgmap v2164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:07 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e154 e154: 3 total, 3 up, 3 in
Jan 22 14:40:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:07 compute-2 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:07.625+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:08 compute-2 ceph-mon[77081]: osdmap e154: 3 total, 3 up, 3 in
Jan 22 14:40:08 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:08 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:08 compute-2 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:08.668+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:09 compute-2 ceph-mon[77081]: pgmap v2166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 700 MiB data, 601 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:09 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:09.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:09 compute-2 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:09.685+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:10 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:10.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:10 compute-2 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:10.702+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:11 compute-2 ceph-mon[77081]: pgmap v2167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 14:40:11 compute-2 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:11.723+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:12 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:12.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 e155: 3 total, 3 up, 3 in
Jan 22 14:40:12 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:12.717+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:13 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:13 compute-2 ceph-mon[77081]: pgmap v2168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 14:40:13 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:13 compute-2 ceph-mon[77081]: osdmap e155: 3 total, 3 up, 3 in
Jan 22 14:40:13 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:13.727+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:14 compute-2 podman[257601]: 2026-01-22 14:40:14.029598264 +0000 UTC m=+0.081968779 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
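[annotation] The podman health_status entry above comes from the container's configured healthcheck: per the embedded config_data, /var/lib/openstack/healthchecks/ovn_metadata_agent is mounted at /openstack and the test command is /openstack/healthcheck. The same check can be driven and read back by hand; a sketch assuming the podman CLI on this host and the docker-compatible .State.Health inspect layout:

    import subprocess

    name = "ovn_metadata_agent"

    # Run the container's own healthcheck once (exit status reflects health).
    subprocess.run(["podman", "healthcheck", "run", name], check=False)

    # Read back the recorded status: healthy / unhealthy / starting.
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)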
Jan 22 14:40:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:14 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:14.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:14 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:14.732+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:15 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:15 compute-2 ceph-mon[77081]: pgmap v2170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 14:40:15 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:15.769+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:16 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:16.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:16 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:16.784+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:17 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:17 compute-2 ceph-mon[77081]: pgmap v2171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 22 14:40:17 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:17.806+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:40:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:40:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:40:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:40:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:18.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:18 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:18 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:40:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
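[annotation] The audit lines show client.openstack (from 192.168.122.10, the control-plane side) polling storage capacity: a cluster-wide df plus a per-pool quota lookup on "volumes", both issued as JSON mon commands, which is the usual Cinder capacity-reporting pattern. The shell-level equivalent, sketched with subprocess (assumes the ceph CLI and a keyring with comparable caps; the JSON keys used below match current ceph df output):

    import json
    import subprocess

    def mon_json(*args: str) -> dict:
        """Run a ceph mon command and decode its JSON output."""
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    df = mon_json("df")                                    # {"prefix": "df"}
    quota = mon_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota)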
Jan 22 14:40:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:18.768+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:18 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:19.756+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:19 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:20 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:20 compute-2 ceph-mon[77081]: pgmap v2172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 14:40:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:20.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:20.794+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:20 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:20 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:20 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:20 compute-2 ceph-mon[77081]: pgmap v2173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:21 compute-2 sshd-session[257625]: Invalid user ubuntu from 92.118.39.95 port 56184
Jan 22 14:40:21 compute-2 sshd-session[257625]: Connection closed by invalid user ubuntu 92.118.39.95 port 56184 [preauth]
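[annotation] Unrelated to the storage noise, sshd records a routine Internet probe here: a login attempt for the nonexistent user "ubuntu" from 92.118.39.95 that disconnects before authenticating. A sketch for tallying such attempts from a saved log like this one; the regex follows the line above and the input path is hypothetical:

    import re
    from collections import Counter

    # Pattern taken from the sshd line above.
    INVALID = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+")

    hits = Counter()
    with open("compute-2.log", encoding="utf-8") as f:  # hypothetical path
        for line in f:
            m = INVALID.search(line)
            if m:
                hits[(m["ip"], m["user"])] += 1
    print(hits.most_common(10))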
Jan 22 14:40:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:21.772+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:21 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:21 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:22.794+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:22 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:22 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:22 compute-2 ceph-mon[77081]: pgmap v2174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:22 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3813 sec, osd.2 has slow ops (SLOW_OPS)
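[annotation] The blocked-for counter in these SLOW_OPS updates (3788 s at 14:40:00, 3793 s at 14:40:02, ... 3813 s at 14:40:22) advances in lockstep with the wall clock, so the oldest op has been stuck since roughly 13:36:50 UTC and nothing is draining. Checking that arithmetic:

    from datetime import datetime, timedelta, timezone

    seen = datetime(2026, 1, 22, 14, 40, 22, tzinfo=timezone.utc)
    blocked = timedelta(seconds=3813)
    print(seen - blocked)  # 2026-01-22 13:36:49+00:00 -- onset of the stall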
Jan 22 14:40:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:23.751+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:23 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:24 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:24.716+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:24 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:25 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:25 compute-2 ceph-mon[77081]: pgmap v2175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:25 compute-2 sudo[257630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:25 compute-2 sudo[257630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:25 compute-2 sudo[257630]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:25 compute-2 sudo[257656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:25 compute-2 sudo[257656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:25 compute-2 sudo[257656]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:25 compute-2 podman[257654]: 2026-01-22 14:40:25.468743642 +0000 UTC m=+0.162548741 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 22 14:40:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:25.766+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:25 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:26 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:26.779+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:26 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:26 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:26.833 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:40:26 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:26.835 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:40:27 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:27 compute-2 ceph-mon[77081]: pgmap v2176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:27.742+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:27 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:28 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:28 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:28.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:28.739+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:28 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:29 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:29 compute-2 ceph-mon[77081]: pgmap v2177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:29.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:29.715+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:29 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:30 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:30.704+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:30 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:31 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:31 compute-2 ceph-mon[77081]: pgmap v2178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:31.752+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:31 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:32 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:32.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:32.721+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:32 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:33 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:33 compute-2 ceph-mon[77081]: pgmap v2179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:33 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:33.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:33.710+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:33 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:34 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:34.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:34.699+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:34 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:35 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:35 compute-2 ceph-mon[77081]: pgmap v2180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:35.713+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:35 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:35 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:35.836 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:40:36 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:36.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:36.721+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:36 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:37 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:37 compute-2 ceph-mon[77081]: pgmap v2181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.646390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837646499, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 969, "num_deletes": 256, "total_data_size": 1535676, "memory_usage": 1554616, "flush_reason": "Manual Compaction"}
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837660471, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 1008549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64204, "largest_seqno": 65168, "table_properties": {"data_size": 1004293, "index_size": 1779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10872, "raw_average_key_size": 20, "raw_value_size": 995023, "raw_average_value_size": 1842, "num_data_blocks": 77, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092786, "oldest_key_time": 1769092786, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 14116 microseconds, and 7758 cpu microseconds.
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.660525) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 1008549 bytes OK
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.660549) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.662292) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.662336) EVENT_LOG_v1 {"time_micros": 1769092837662303, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.662362) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1530677, prev total WAL file size 1530677, number of live WAL files 2.
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.663153) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(984KB)], [126(11MB)]
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837663228, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12919491, "oldest_snapshot_seqno": -1}
Jan 22 14:40:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:37.691+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:37 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 11109 keys, 12766855 bytes, temperature: kUnknown
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837773245, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 12766855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12702136, "index_size": 35870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 300988, "raw_average_key_size": 27, "raw_value_size": 12509152, "raw_average_value_size": 1126, "num_data_blocks": 1353, "num_entries": 11109, "num_filter_entries": 11109, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.773700) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 12766855 bytes
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.775744) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.2 rd, 115.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(25.5) write-amplify(12.7) OK, records in: 11638, records dropped: 529 output_compression: NoCompression
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.775781) EVENT_LOG_v1 {"time_micros": 1769092837775764, "job": 80, "event": "compaction_finished", "compaction_time_micros": 110212, "compaction_time_cpu_micros": 49134, "output_level": 6, "num_output_files": 1, "total_output_size": 12766855, "num_input_records": 11638, "num_output_records": 11109, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837776554, "job": 80, "event": "table_file_deletion", "file_number": 128}
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837781026, "job": 80, "event": "table_file_deletion", "file_number": 126}
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.663036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:37 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:40:38 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:38 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:38 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:38.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:38.692+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:38 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:39.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:39 compute-2 ceph-mon[77081]: pgmap v2182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:39 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:39.710+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:39 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:40.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:40.698+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:40 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:41 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:41 compute-2 ceph-mon[77081]: pgmap v2183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:41.661+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:41 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:42 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:42 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:42.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:42.644+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:42 compute-2 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:43.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e156 e156: 3 total, 3 up, 3 in
Jan 22 14:40:43 compute-2 ceph-mon[77081]: pgmap v2184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:40:43 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:43 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:43.629+0000 7f47f8ed4640 -1 osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:43 compute-2 ceph-osd[79779]: osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:44 compute-2 ceph-mon[77081]: osdmap e156: 3 total, 3 up, 3 in
Jan 22 14:40:44 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:44.598+0000 7f47f8ed4640 -1 osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:44 compute-2 ceph-osd[79779]: osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:44.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:45 compute-2 podman[257719]: 2026-01-22 14:40:45.038417295 +0000 UTC m=+0.095522796 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 14:40:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:40:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:45.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:40:45 compute-2 sudo[257738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:45 compute-2 sudo[257738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:45 compute-2 sudo[257738]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:45 compute-2 ceph-mon[77081]: pgmap v2186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Jan 22 14:40:45 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:45 compute-2 sudo[257763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:45 compute-2 sudo[257763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:45 compute-2 sudo[257763]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:45.617+0000 7f47f8ed4640 -1 osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:45 compute-2 ceph-osd[79779]: osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:46 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:46 compute-2 ceph-mon[77081]: pgmap v2187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 1.4 KiB/s wr, 11 op/s
Jan 22 14:40:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e157 e157: 3 total, 3 up, 3 in
Jan 22 14:40:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:46.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:46.629+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:46 compute-2 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:47.217 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:40:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:40:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:40:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:40:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:47 compute-2 ceph-mon[77081]: osdmap e157: 3 total, 3 up, 3 in
Jan 22 14:40:47 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:47.634+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:47 compute-2 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:48 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:48 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:48 compute-2 ceph-mon[77081]: pgmap v2189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 1.7 KiB/s wr, 14 op/s
Jan 22 14:40:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:48.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:48 compute-2 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:48.634+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:49 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:49 compute-2 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:49.681+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:50 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:50 compute-2 ceph-mon[77081]: pgmap v2190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Jan 22 14:40:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:50.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:50 compute-2 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:50.688+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:51 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:51 compute-2 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:51.705+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:40:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:52.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:40:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 e158: 3 total, 3 up, 3 in
Jan 22 14:40:52 compute-2 ceph-mon[77081]: pgmap v2191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.5 KiB/s wr, 44 op/s
Jan 22 14:40:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:52.710+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:52 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:53 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:53 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:53 compute-2 ceph-mon[77081]: osdmap e158: 3 total, 3 up, 3 in
Jan 22 14:40:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:53.680+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:53 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:54.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:54 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:54 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:54 compute-2 ceph-mon[77081]: pgmap v2193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Jan 22 14:40:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:54.721+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:54 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:40:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:40:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:55.685+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:55 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:55 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:55 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:55 compute-2 sudo[257793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:55 compute-2 sudo[257793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:55 compute-2 sudo[257793]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:56 compute-2 sudo[257824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:40:56 compute-2 sudo[257824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:56 compute-2 sudo[257824]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:56 compute-2 podman[257817]: 2026-01-22 14:40:56.052927298 +0000 UTC m=+0.127088742 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:40:56 compute-2 sudo[257870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:40:56 compute-2 sudo[257870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:56 compute-2 sudo[257870]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:56 compute-2 sudo[257895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:40:56 compute-2 sudo[257895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:40:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:56.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:56.653+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:56 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:56 compute-2 sudo[257895]: pam_unix(sudo:session): session closed for user root
Jan 22 14:40:57 compute-2 ceph-mon[77081]: pgmap v2194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Jan 22 14:40:57 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:57.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:57.611+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:57 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:40:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:40:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:40:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:40:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:40:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:40:58 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:58 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:40:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:58.582+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:58 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:58.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:59 compute-2 ceph-mon[77081]: pgmap v2195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Jan 22 14:40:59 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:40:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:40:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:40:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:40:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:59.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:40:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:59.587+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:59 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:40:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:00 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:00.566+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:00 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:00.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:01 compute-2 ceph-mon[77081]: pgmap v2196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:01 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:01.608+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:01 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:02 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:02.614+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:02 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:03 compute-2 ceph-mon[77081]: pgmap v2197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:03 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:03 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:03.636+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:03 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:03 compute-2 sudo[257955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:03 compute-2 sudo[257955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:03 compute-2 sudo[257955]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:04 compute-2 sudo[257980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:41:04 compute-2 sudo[257980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:04 compute-2 sudo[257980]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:41:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:41:04 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:04.604+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:04 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:05.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:05.557+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:05 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:05 compute-2 ceph-mon[77081]: pgmap v2198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:05 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:05 compute-2 sudo[258006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:05 compute-2 sudo[258006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:05 compute-2 sudo[258006]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:05 compute-2 sudo[258031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:05 compute-2 sudo[258031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:05 compute-2 sudo[258031]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:06.579+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:06 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:06 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:06 compute-2 ceph-mon[77081]: pgmap v2199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:06.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:07 compute-2 ovn_controller[133156]: 2026-01-22T14:41:07Z|00076|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 22 14:41:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:07.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:07 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:07.612+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:07 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:08.612+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:08 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:08 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:08 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:08 compute-2 ceph-mon[77081]: pgmap v2200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:08.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:09.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:09.570+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:09 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:09 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:10.570+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:10 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:10.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:10 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:10 compute-2 ceph-mon[77081]: pgmap v2201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:11.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:11.616+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:11 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:11 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:11 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:12.648+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:12 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:12.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:12 compute-2 ceph-mon[77081]: pgmap v2202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:12 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:12 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:13.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:13.660+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:13 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:13 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:14.616+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:14 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:14.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:14 compute-2 ceph-mon[77081]: pgmap v2203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:14 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:15.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:15.650+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:15 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:15 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:16 compute-2 podman[258061]: 2026-01-22 14:41:16.005206092 +0000 UTC m=+0.062185511 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:41:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:16.649+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:16 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:16 compute-2 ceph-mon[77081]: pgmap v2204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:16 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:17.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:17.601+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:17 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.665299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877665372, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 809, "num_deletes": 251, "total_data_size": 1240210, "memory_usage": 1257784, "flush_reason": "Manual Compaction"}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877671522, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 597682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65173, "largest_seqno": 65977, "table_properties": {"data_size": 594250, "index_size": 1147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10006, "raw_average_key_size": 21, "raw_value_size": 586609, "raw_average_value_size": 1264, "num_data_blocks": 49, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092838, "oldest_key_time": 1769092838, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 6251 microseconds, and 3186 cpu microseconds.
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.671574) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 597682 bytes OK
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.671601) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673176) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673189) EVENT_LOG_v1 {"time_micros": 1769092877673185, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1235868, prev total WAL file size 1235868, number of live WAL files 2.
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373537' seq:72057594037927935, type:22 .. '6D6772737461740032303038' seq:0, type:0; will stop at (end)
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(583KB)], [129(12MB)]
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877673935, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 13364537, "oldest_snapshot_seqno": -1}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 11068 keys, 9678808 bytes, temperature: kUnknown
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877751828, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 9678808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9618647, "index_size": 31376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 300644, "raw_average_key_size": 27, "raw_value_size": 9430588, "raw_average_value_size": 852, "num_data_blocks": 1165, "num_entries": 11068, "num_filter_entries": 11068, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.752141) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9678808 bytes
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.753986) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 124.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(38.6) write-amplify(16.2) OK, records in: 11573, records dropped: 505 output_compression: NoCompression
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.754003) EVENT_LOG_v1 {"time_micros": 1769092877753994, "job": 82, "event": "compaction_finished", "compaction_time_micros": 77983, "compaction_time_cpu_micros": 37712, "output_level": 6, "num_output_files": 1, "total_output_size": 9678808, "num_input_records": 11573, "num_output_records": 11068, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877754241, "job": 82, "event": "table_file_deletion", "file_number": 131}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877756377, "job": 82, "event": "table_file_deletion", "file_number": 129}
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:41:17 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:17 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:18.593+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:18 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:18.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2752123973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:41:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2752123973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:41:18 compute-2 ceph-mon[77081]: pgmap v2205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:18 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:19.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:19.551+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:19 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:20 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:20 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:20.578+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:20.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:21 compute-2 ceph-mon[77081]: pgmap v2206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:21 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:21.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:21.556+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:21 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:22 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:22.578+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:22 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:22.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:23 compute-2 ceph-mon[77081]: pgmap v2207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:23 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:23 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:23.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:23.539+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:23 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:24 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:24.492+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:24 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:24.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:25 compute-2 ceph-mon[77081]: pgmap v2208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:25 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:25.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:25 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:25.534+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:25 compute-2 sudo[258087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:25 compute-2 sudo[258087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:25 compute-2 sudo[258087]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:25 compute-2 sudo[258112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:25 compute-2 sudo[258112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:25 compute-2 sudo[258112]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:26 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:26.521+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:26 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:26.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:27 compute-2 podman[258139]: 2026-01-22 14:41:27.068463805 +0000 UTC m=+0.115350682 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:41:27 compute-2 ceph-mon[77081]: pgmap v2209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:27 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:27.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:27.536+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:27 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:28 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:28 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:28.537+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:28 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:28.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:29 compute-2 ceph-mon[77081]: pgmap v2210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:29 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:29.561+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:29 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:29 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:41:29.628 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:41:29 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:41:29.629 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:41:29 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:41:29.630 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:41:30 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:30.526+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:30 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:31 compute-2 ceph-mon[77081]: pgmap v2211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:31 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:31.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:31.547+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:31 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:32 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:32.508+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:32 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:32.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:33 compute-2 ceph-mon[77081]: pgmap v2212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:33 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:33 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:33.487+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:33 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:33.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:34 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:34.484+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:34 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:35 compute-2 ceph-mon[77081]: pgmap v2213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:35 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:35.485+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:35 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:35.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:36.487+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:36 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:36 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:36.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:37.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:37.529+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:37 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:37 compute-2 ceph-mon[77081]: pgmap v2214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:37 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:38.511+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:38 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:38.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:38 compute-2 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 14:41:38 compute-2 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:38 compute-2 ceph-mon[77081]: pgmap v2215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:39.473+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:39 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:39.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:39 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:39 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:40.449+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:40 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:40.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:40 compute-2 ceph-mon[77081]: pgmap v2216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:40 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:41.414+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:41 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:41.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:41 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:42 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:42.444+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:42 compute-2 ceph-mon[77081]: pgmap v2217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:42 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:42 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:43.434+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:43 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:43.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:43 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:44.474+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:44 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:44.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:44 compute-2 ceph-mon[77081]: pgmap v2218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:44 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:45.440+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:45 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:45.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:45 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:46 compute-2 sudo[258174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:46 compute-2 sudo[258174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:46 compute-2 sudo[258174]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:46 compute-2 sudo[258205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:41:46 compute-2 podman[258198]: 2026-01-22 14:41:46.179084657 +0000 UTC m=+0.085041993 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 14:41:46 compute-2 sudo[258205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:41:46 compute-2 sudo[258205]: pam_unix(sudo:session): session closed for user root
Jan 22 14:41:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:46.423+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:46 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:46.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:47 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:47 compute-2 ceph-mon[77081]: pgmap v2219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:41:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:41:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:41:47.219 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:41:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:41:47.219 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:41:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:47.455+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:47 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:47.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:48 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:48 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:48.457+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:48 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:48.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:49 compute-2 ceph-mon[77081]: pgmap v2220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:41:49 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:49 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:49.453+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:49.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:50 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:50 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:50.407+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:41:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:50.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:41:51 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:51 compute-2 ceph-mon[77081]: pgmap v2221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:51 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:51.432+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:51.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:52 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:52 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:52.466+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:52.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:53 compute-2 ceph-mon[77081]: pgmap v2222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:53 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:53 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:53 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:53.501+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:41:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:53.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:41:54 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:54 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:54.511+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:54.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:55 compute-2 ceph-mon[77081]: pgmap v2223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:55 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:55 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:55.557+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:56 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:56 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:56.513+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:56 compute-2 sshd-session[258250]: Invalid user ubuntu from 45.148.10.240 port 51278
Jan 22 14:41:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:56.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:56 compute-2 sshd-session[258250]: Connection closed by invalid user ubuntu 45.148.10.240 port 51278 [preauth]
Jan 22 14:41:57 compute-2 ceph-mon[77081]: pgmap v2224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:57 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:57 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:57.526+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:57.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:58 compute-2 podman[258253]: 2026-01-22 14:41:58.084765015 +0000 UTC m=+0.132219249 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:41:58 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:58 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:41:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:58.563+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:58 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:58.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:41:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:41:59 compute-2 ceph-mon[77081]: pgmap v2225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:41:59 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:59.515+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:59 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:41:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:41:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:41:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:41:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:59.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:00 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:00.523+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:00 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:00.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:01 compute-2 ceph-mon[77081]: pgmap v2226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:42:01 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:01.543+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:01 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:01.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:02.501+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:02 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:02 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:03.490+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:03 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:03.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:03 compute-2 ceph-mon[77081]: pgmap v2227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:03 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:03 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:04 compute-2 sudo[258283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:04 compute-2 sudo[258283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-2 sudo[258283]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:04 compute-2 sudo[258308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:42:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:04 compute-2 sudo[258308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-2 sudo[258308]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:04 compute-2 sudo[258333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:04 compute-2 sudo[258333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-2 sudo[258333]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:04 compute-2 sudo[258358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:42:04 compute-2 sudo[258358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:04.505+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:04 compute-2 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:04 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 e159: 3 total, 3 up, 3 in
Jan 22 14:42:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:05 compute-2 sudo[258358]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:05.550+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:05 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:05.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:05 compute-2 ceph-mon[77081]: pgmap v2228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:05 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:05 compute-2 ceph-mon[77081]: osdmap e159: 3 total, 3 up, 3 in
Jan 22 14:42:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:42:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:42:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:42:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:42:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:42:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:42:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:42:06 compute-2 sudo[258415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:06 compute-2 sudo[258415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:06 compute-2 sudo[258415]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:06 compute-2 sudo[258440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:06 compute-2 sudo[258440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:06 compute-2 sudo[258440]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:06.515+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:06 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:06 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:06 compute-2 ceph-mon[77081]: pgmap v2230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.3 KiB/s wr, 13 op/s
Jan 22 14:42:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:07.484+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:07 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:07 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:08.514+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:08 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:08 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 14:42:08 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:08 compute-2 ceph-mon[77081]: pgmap v2231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.3 KiB/s wr, 13 op/s
Jan 22 14:42:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:09.478+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:09 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:09 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:10.505+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:10 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:11 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:11 compute-2 ceph-mon[77081]: pgmap v2232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:42:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:11.465+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:11 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:12 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:42:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:42:12 compute-2 sudo[258468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:12 compute-2 sudo[258468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:12 compute-2 sudo[258468]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:12 compute-2 sudo[258493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:42:12 compute-2 sudo[258493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:12 compute-2 sudo[258493]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:12.442+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:12 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:13 compute-2 ceph-mon[77081]: pgmap v2233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:42:13 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:13 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:13.465+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:13 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:14 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:14.509+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:14 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:15 compute-2 ceph-mon[77081]: pgmap v2234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:42:15 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:15.550+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:15 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:16 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:16.503+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:16 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:17 compute-2 podman[258521]: 2026-01-22 14:42:17.011901292 +0000 UTC m=+0.066032613 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:42:17 compute-2 ceph-mon[77081]: pgmap v2235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Jan 22 14:42:17 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:17.456+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:17 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:18 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:18 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:18.466+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:18 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 14:42:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:18.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 14:42:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1612262341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:42:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1612262341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:42:19 compute-2 ceph-mon[77081]: pgmap v2236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:42:19 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:19.435+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:19 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:19.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:20 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:20.387+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:20 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:20.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:21.363+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:21 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:21 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:21 compute-2 ceph-mon[77081]: pgmap v2237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:42:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:21.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:22.396+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:22 compute-2 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e160 e160: 3 total, 3 up, 3 in
Jan 22 14:42:22 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:22.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:23 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:23 compute-2 ceph-mon[77081]: osdmap e160: 3 total, 3 up, 3 in
Jan 22 14:42:23 compute-2 ceph-mon[77081]: pgmap v2239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:23 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:23.440+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:23 compute-2 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:23.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:24.393+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:24 compute-2 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:24 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:25.368+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:25 compute-2 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:25 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:25 compute-2 ceph-mon[77081]: pgmap v2240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 614 B/s rd, 0 B/s wr, 1 op/s
Jan 22 14:42:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:25.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:26.389+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:26 compute-2 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:26 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:26 compute-2 sudo[258545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:26 compute-2 sudo[258545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:26 compute-2 sudo[258545]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:26 compute-2 sudo[258570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:26 compute-2 sudo[258570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:26 compute-2 sudo[258570]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:26.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:27.420+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:27 compute-2 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:27 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:27 compute-2 ceph-mon[77081]: pgmap v2241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 22 14:42:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:27.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 e161: 3 total, 3 up, 3 in
Jan 22 14:42:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:28.389+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:28 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:28 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:28 compute-2 ceph-mon[77081]: osdmap e161: 3 total, 3 up, 3 in
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.576210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948576270, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1226, "num_deletes": 252, "total_data_size": 2044325, "memory_usage": 2076624, "flush_reason": "Manual Compaction"}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948591645, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1341494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65982, "largest_seqno": 67203, "table_properties": {"data_size": 1336559, "index_size": 2266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13171, "raw_average_key_size": 20, "raw_value_size": 1325558, "raw_average_value_size": 2097, "num_data_blocks": 98, "num_entries": 632, "num_filter_entries": 632, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092877, "oldest_key_time": 1769092877, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 15500 microseconds, and 7856 cpu microseconds.
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.591707) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1341494 bytes OK
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.591734) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.593812) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.593837) EVENT_LOG_v1 {"time_micros": 1769092948593830, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.593860) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2038299, prev total WAL file size 2038299, number of live WAL files 2.
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.595052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1310KB)], [132(9451KB)]
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948595115, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 11020302, "oldest_snapshot_seqno": -1}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 11179 keys, 9368745 bytes, temperature: kUnknown
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948691206, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 9368745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9308330, "index_size": 31374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 304151, "raw_average_key_size": 27, "raw_value_size": 9118696, "raw_average_value_size": 815, "num_data_blocks": 1161, "num_entries": 11179, "num_filter_entries": 11179, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.691620) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9368745 bytes
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.693238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.5 rd, 97.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 11700, records dropped: 521 output_compression: NoCompression
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.693268) EVENT_LOG_v1 {"time_micros": 1769092948693252, "job": 84, "event": "compaction_finished", "compaction_time_micros": 96249, "compaction_time_cpu_micros": 45965, "output_level": 6, "num_output_files": 1, "total_output_size": 9368745, "num_input_records": 11700, "num_output_records": 11179, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948693775, "job": 84, "event": "table_file_deletion", "file_number": 134}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948696808, "job": 84, "event": "table_file_deletion", "file_number": 132}
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.594986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:42:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:29 compute-2 podman[258597]: 2026-01-22 14:42:29.087833714 +0000 UTC m=+0.134796128 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 14:42:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:29.419+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:29.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:30 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:30 compute-2 ceph-mon[77081]: pgmap v2243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 14:42:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:30.419+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:30 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:42:30.495 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:42:30 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:42:30.501 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:42:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:31 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:31 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:42:31 compute-2 ceph-mon[77081]: pgmap v2244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 22 14:42:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:31.459+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:32 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:32.496+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:33 compute-2 ceph-mon[77081]: pgmap v2245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 22 14:42:33 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:33 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:33.495+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:33 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:42:33.504 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:42:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:33.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:34 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:34.541+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:35 compute-2 ceph-mon[77081]: pgmap v2246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 23 op/s
Jan 22 14:42:35 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:35.572+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:35.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:36 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:36.614+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:36.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:37 compute-2 ceph-mon[77081]: pgmap v2247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:37 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:37.574+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:37.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:38 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:38 compute-2 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:38.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:39.573+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:39 compute-2 ceph-mon[77081]: pgmap v2248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:39 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:40.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:40 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:40 compute-2 ceph-mon[77081]: pgmap v2249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:40.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:41.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:41 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:42.621+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:42 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:42 compute-2 ceph-mon[77081]: pgmap v2250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:42.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:43.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:43.604+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:43 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:43 compute-2 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:44.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:44 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:44 compute-2 ceph-mon[77081]: pgmap v2251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:44.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:45.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:45 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:46.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:46 compute-2 sudo[258632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:46 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:46 compute-2 ceph-mon[77081]: pgmap v2252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:46 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:46 compute-2 sudo[258632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:46 compute-2 sudo[258632]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:46.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:46 compute-2 sudo[258658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:42:46 compute-2 sudo[258658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:42:46 compute-2 sudo[258658]: pam_unix(sudo:session): session closed for user root
Jan 22 14:42:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:42:47.219 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:42:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:42:47.220 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:42:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:42:47.220 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:42:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:42:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:42:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:47.690+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:47 compute-2 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:47 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:48 compute-2 podman[258683]: 2026-01-22 14:42:48.035291895 +0000 UTC m=+0.090212452 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 14:42:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:48.675+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:48 compute-2 ceph-mon[77081]: pgmap v2253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:48 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:48.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:49.691+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:49 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:50.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:50.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:50 compute-2 ceph-mon[77081]: pgmap v2254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:50 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:51.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:51.668+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:51 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:52.675+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:52.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:52 compute-2 ceph-mon[77081]: pgmap v2255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:52 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:52 compute-2 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:53.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:53.708+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:53 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:54.662+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:54.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:54 compute-2 ceph-mon[77081]: pgmap v2256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:54 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:42:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:55.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:42:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:55.687+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:55 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:56.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:56.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:56 compute-2 ceph-mon[77081]: pgmap v2257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:56 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:42:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 14:42:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:57.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 14:42:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:57.681+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:42:57 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:42:57 compute-2 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:42:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:58.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:42:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:58.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:58 compute-2 ceph-mon[77081]: pgmap v2258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:42:58 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:42:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:42:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:42:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:42:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:42:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:59.695+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:42:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:42:59 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:00 compute-2 podman[258710]: 2026-01-22 14:43:00.095796801 +0000 UTC m=+0.133667169 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:43:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:00.731+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:00.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:00 compute-2 ceph-mon[77081]: pgmap v2259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:00 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:01.750+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:02.769+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:03 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:03 compute-2 ceph-mon[77081]: pgmap v2260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:03 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:03.723+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:04 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:04.690+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:05 compute-2 ceph-mon[77081]: pgmap v2261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:05 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:05.679+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:06 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:06 compute-2 ceph-mon[77081]: pgmap v2262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:06.680+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:06.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:06 compute-2 sudo[258742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:06 compute-2 sudo[258742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:06 compute-2 sudo[258742]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:07 compute-2 sudo[258767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:07 compute-2 sudo[258767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:07 compute-2 sudo[258767]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:07 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:07.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:07.692+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:08 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:08 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:08 compute-2 ceph-mon[77081]: pgmap v2263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:08.648+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:09.603+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:09 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:10.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:10.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:10 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:10 compute-2 ceph-mon[77081]: pgmap v2264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:11.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:11.650+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:11 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:12 compute-2 sudo[258794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:12 compute-2 sudo[258794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-2 sudo[258794]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:12 compute-2 sudo[258819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:43:12 compute-2 sudo[258819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-2 sudo[258819]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:12 compute-2 sudo[258844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:12 compute-2 sudo[258844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-2 sudo[258844]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:12.654+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:12 compute-2 sudo[258869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:43:12 compute-2 sudo[258869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:12.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:13 compute-2 ceph-mon[77081]: pgmap v2265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:13 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:13 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:13 compute-2 sudo[258869]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:13.682+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:43:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:43:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:43:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:43:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:43:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:43:14 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:14.633+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:14.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:15 compute-2 ceph-mon[77081]: pgmap v2266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:15 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:15.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:15.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:16 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:16.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:16.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:17.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:17 compute-2 ceph-mon[77081]: pgmap v2267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:17 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:43:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:43:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:43:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:43:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:18.663+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:19 compute-2 podman[258930]: 2026-01-22 14:43:19.022482657 +0000 UTC m=+0.077615615 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 14:43:19 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:19 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:43:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:43:19 compute-2 ceph-mon[77081]: pgmap v2268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:19 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:19.664+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:20 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:20.656+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:20.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:21 compute-2 ceph-mon[77081]: pgmap v2269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:21 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:21.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:22 compute-2 sudo[258950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:22 compute-2 sudo[258950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:22 compute-2 sudo[258950]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:22 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:43:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:43:22 compute-2 sudo[258975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:43:22 compute-2 sudo[258975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:22 compute-2 sudo[258975]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:22.692+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:22.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:23 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:43:23.040 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:43:23 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:43:23.042 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:43:23 compute-2 ceph-mon[77081]: pgmap v2270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:23 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:23 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:23.740+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:24 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:24 compute-2 ceph-mon[77081]: pgmap v2271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:24.721+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:24.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:25.734+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:25 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:26.783+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:26 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:26 compute-2 ceph-mon[77081]: pgmap v2272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:27 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:43:27.044 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:43:27 compute-2 sudo[259003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:27 compute-2 sudo[259003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:27 compute-2 sudo[259003]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:27 compute-2 sudo[259028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:27 compute-2 sudo[259028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:27 compute-2 sudo[259028]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:27.777+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:27 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:27 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:28.822+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:29 compute-2 ceph-mon[77081]: pgmap v2273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:29.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:29.859+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:30 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:30 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:30.898+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:31 compute-2 podman[259055]: 2026-01-22 14:43:31.044734907 +0000 UTC m=+0.106177411 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:43:31 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:31 compute-2 ceph-mon[77081]: pgmap v2274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:31.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:31.921+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:32 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:32.933+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:32.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:33 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:33 compute-2 ceph-mon[77081]: pgmap v2275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:33 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:33.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:33.982+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:34 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:34.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:34.981+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:35 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:35 compute-2 ceph-mon[77081]: pgmap v2276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:35.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:36.008+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:36 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:37.027+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:37 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:37 compute-2 ceph-mon[77081]: pgmap v2277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:38.016+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:38 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:38 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:39.044+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:39 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:39 compute-2 ceph-mon[77081]: pgmap v2278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:39 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:40.054+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:40 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:40 compute-2 ceph-mon[77081]: pgmap v2279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:41.015+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:41.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:42.014+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:42 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:43.053+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:43 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:43 compute-2 ceph-mon[77081]: pgmap v2280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:43 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:43.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:44.087+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:44 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:45.117+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:45 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:45 compute-2 ceph-mon[77081]: pgmap v2281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:45.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:46.162+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:46 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:47.148+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:43:47.220 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:43:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:43:47.221 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:43:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:43:47.221 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:43:47 compute-2 sudo[259090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:47 compute-2 sudo[259090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:47 compute-2 sudo[259090]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:47 compute-2 sudo[259115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:43:47 compute-2 sudo[259115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:43:47 compute-2 sudo[259115]: pam_unix(sudo:session): session closed for user root
Jan 22 14:43:47 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:47 compute-2 ceph-mon[77081]: pgmap v2282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:48.172+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:48 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:48 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:49.199+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:49 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:49 compute-2 ceph-mon[77081]: pgmap v2283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:49.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:50 compute-2 podman[259141]: 2026-01-22 14:43:50.016703531 +0000 UTC m=+0.070138437 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 14:43:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:50.151+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:50 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:51.147+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:51 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:51 compute-2 ceph-mon[77081]: pgmap v2284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:52.158+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:52 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:53.179+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:53 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:53 compute-2 ceph-mon[77081]: pgmap v2285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:53 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4022 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:54.190+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:54 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:55.206+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:55 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:55 compute-2 ceph-mon[77081]: pgmap v2286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:43:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:43:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:56.186+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:56 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:43:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:43:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:57.227+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:57 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:57 compute-2 ceph-mon[77081]: pgmap v2287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:57.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:58.272+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:58 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:58 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:43:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:59.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:59.320+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:43:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:43:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:43:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:43:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:59.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:43:59 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:43:59 compute-2 ceph-mon[77081]: pgmap v2288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:43:59 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:00.321+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:00 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:00 compute-2 ceph-mon[77081]: pgmap v2289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:01.291+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:01 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:02 compute-2 podman[259167]: 2026-01-22 14:44:02.09223525 +0000 UTC m=+0.141245620 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 14:44:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:02.324+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:03 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:03 compute-2 ceph-mon[77081]: pgmap v2290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:03 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:03.352+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:03.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:04.348+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:04 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:05.299+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:05 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:05 compute-2 ceph-mon[77081]: pgmap v2291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:05.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:06.265+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:06 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:06 compute-2 ceph-mon[77081]: pgmap v2292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:07.290+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:07 compute-2 sudo[259198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:07 compute-2 sudo[259198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:07 compute-2 sudo[259198]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:07 compute-2 sudo[259223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:07 compute-2 sudo[259223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:07 compute-2 sudo[259223]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:07 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:08.278+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:08 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:08 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:08 compute-2 ceph-mon[77081]: pgmap v2293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:09.268+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:09 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:10.229+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:10 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:10 compute-2 ceph-mon[77081]: pgmap v2294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:11.258+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:11.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:11 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:12.293+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:12 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:12 compute-2 ceph-mon[77081]: pgmap v2295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:13.330+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:13.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:13 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:13 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:14.375+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:14 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:14 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:14 compute-2 ceph-mon[77081]: pgmap v2296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:14 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:14.902 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:44:14 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:14.904 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:44:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:15.400+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:15.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:15 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:16.370+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:17 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:17 compute-2 ceph-mon[77081]: pgmap v2297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:17.328+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:17 compute-2 sshd-session[259253]: Invalid user ubuntu from 45.148.10.240 port 40910
Jan 22 14:44:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:17.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:17 compute-2 sshd-session[259253]: Connection closed by invalid user ubuntu 45.148.10.240 port 40910 [preauth]
Jan 22 14:44:18 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:18 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4047 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:18.350+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:19.393+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:44:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2737044789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:44:19 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:19 compute-2 ceph-mon[77081]: pgmap v2298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:20.424+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:20 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:20 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:20.907 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:44:21 compute-2 podman[259257]: 2026-01-22 14:44:21.029194507 +0000 UTC m=+0.078707404 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:44:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:21.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:21.444+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:21 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:21 compute-2 ceph-mon[77081]: pgmap v2299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:21 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:22.491+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:22 compute-2 sudo[259278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:22 compute-2 sudo[259278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-2 sudo[259278]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:22 compute-2 sudo[259303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:44:22 compute-2 sudo[259303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-2 sudo[259303]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:22 compute-2 sudo[259328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:22 compute-2 sudo[259328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-2 sudo[259328]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:22 compute-2 sudo[259354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:44:22 compute-2 sudo[259354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:22 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:22 compute-2 ceph-mon[77081]: pgmap v2300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:23.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:23 compute-2 sudo[259354]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:23.510+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:24 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4052 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:24 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:24.526+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:25 compute-2 ceph-mon[77081]: pgmap v2301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:25 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:25.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:44:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:44:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:26.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:26 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:26 compute-2 ceph-mon[77081]: pgmap v2302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:44:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:44:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:27.553+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:27 compute-2 sudo[259412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:27.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:27 compute-2 sudo[259412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:27 compute-2 sudo[259412]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:27 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:27 compute-2 sudo[259437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:27 compute-2 sudo[259437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:27 compute-2 sudo[259437]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:28.553+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:28 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:28 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:28 compute-2 ceph-mon[77081]: pgmap v2303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:28 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:29.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:29.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:29 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:30.584+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:30 compute-2 ceph-mon[77081]: pgmap v2304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:30 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:31.558+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:31.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:31 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:32.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:32 compute-2 ceph-mon[77081]: pgmap v2305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:32 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:33 compute-2 podman[259465]: 2026-01-22 14:44:33.06859329 +0000 UTC m=+0.121336202 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 14:44:33 compute-2 sudo[259492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:33 compute-2 sudo[259492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:33 compute-2 sudo[259492]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:33.634+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:33 compute-2 sudo[259517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:44:33 compute-2 sudo[259517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:33 compute-2 sudo[259517]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:33.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:33 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:44:33 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:34.621+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:34 compute-2 ceph-mon[77081]: pgmap v2306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:34 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:35.635+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:35 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:36.586+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:37 compute-2 ceph-mon[77081]: pgmap v2307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:37 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:37.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:37.559+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:37.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:38 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:38 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:38.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:39 compute-2 ceph-mon[77081]: pgmap v2308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:44:39 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:39.505+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:40 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:40.528+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:41 compute-2 ceph-mon[77081]: pgmap v2309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 14:44:41 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:41.500+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:42.495+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:42 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:43.534+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:43 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:43 compute-2 ceph-mon[77081]: pgmap v2310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 14:44:43 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:44.504+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:44 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:44 compute-2 ceph-mon[77081]: pgmap v2311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 694 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 614 KiB/s wr, 13 op/s
Jan 22 14:44:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:45.554+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:45.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:45 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:46.544+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:46 compute-2 ceph-mon[77081]: pgmap v2312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 14:44:46 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:47.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:47.221 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:44:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:44:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:44:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:47.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:47.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:47 compute-2 sudo[259549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:47 compute-2 sudo[259549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:47 compute-2 sudo[259549]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:47 compute-2 sudo[259574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:44:47 compute-2 sudo[259574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:44:47 compute-2 sudo[259574]: pam_unix(sudo:session): session closed for user root
Jan 22 14:44:48 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:48 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:48.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:49 compute-2 ceph-mon[77081]: pgmap v2313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 14:44:49 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:49.568+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:49.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:50 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:50.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:51 compute-2 ceph-mon[77081]: pgmap v2314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 14:44:51 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:44:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3107862613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:44:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:51.558+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:51.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:52 compute-2 podman[259601]: 2026-01-22 14:44:52.031958307 +0000 UTC m=+0.089706515 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:44:52 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:52.521+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:53 compute-2 ceph-mon[77081]: pgmap v2315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 22 14:44:53 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:53 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:53.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:53.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:54 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:54.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:55.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:55 compute-2 ceph-mon[77081]: pgmap v2316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 711 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 22 14:44:55 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:44:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:55.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:44:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:56.424 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:44:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:44:56.427 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:44:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:56.530+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:56 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:57.554+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:57 compute-2 ceph-mon[77081]: pgmap v2317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 44 op/s
Jan 22 14:44:57 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:58.531+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:58 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:58 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:44:58 compute-2 ceph-mon[77081]: pgmap v2318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:44:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:44:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:59.565+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:44:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:44:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:44:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:44:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:59.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:44:59 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:00.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:00 compute-2 ceph-mon[77081]: pgmap v2319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:45:00 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:01 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:45:01.429 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:45:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:01.567+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:01.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:01 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:02.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:02 compute-2 ceph-mon[77081]: pgmap v2320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:45:02 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:02 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:03.570+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:03.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:04 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:04 compute-2 podman[259627]: 2026-01-22 14:45:04.07438879 +0000 UTC m=+0.127965398 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Jan 22 14:45:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:04.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:05 compute-2 ceph-mon[77081]: pgmap v2321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 14:45:05 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:05.501+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:05.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:06 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:06.481+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:07.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:07 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:07 compute-2 ceph-mon[77081]: pgmap v2322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 341 B/s wr, 13 op/s
Jan 22 14:45:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:07.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.916057) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107916095, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2312, "num_deletes": 257, "total_data_size": 4499819, "memory_usage": 4577656, "flush_reason": "Manual Compaction"}
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107937789, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 2933109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67208, "largest_seqno": 69515, "table_properties": {"data_size": 2924499, "index_size": 4911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21758, "raw_average_key_size": 21, "raw_value_size": 2905497, "raw_average_value_size": 2807, "num_data_blocks": 213, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092948, "oldest_key_time": 1769092948, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 21871 microseconds, and 10824 cpu microseconds.
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.937925) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 2933109 bytes OK
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.937973) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.940004) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.940018) EVENT_LOG_v1 {"time_micros": 1769093107940012, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.940037) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 4489306, prev total WAL file size 4489306, number of live WAL files 2.
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.941367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323637' seq:0, type:0; will stop at (end)
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(2864KB)], [135(9149KB)]
Jan 22 14:45:07 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107941418, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 12301854, "oldest_snapshot_seqno": -1}
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 11687 keys, 12155168 bytes, temperature: kUnknown
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108023264, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12155168, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12089150, "index_size": 35697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 316696, "raw_average_key_size": 27, "raw_value_size": 11888244, "raw_average_value_size": 1017, "num_data_blocks": 1339, "num_entries": 11687, "num_filter_entries": 11687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.023616) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12155168 bytes
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.025290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.4 rd, 148.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 12214, records dropped: 527 output_compression: NoCompression
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.025337) EVENT_LOG_v1 {"time_micros": 1769093108025303, "job": 86, "event": "compaction_finished", "compaction_time_micros": 81777, "compaction_time_cpu_micros": 42759, "output_level": 6, "num_output_files": 1, "total_output_size": 12155168, "num_input_records": 12214, "num_output_records": 11687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108026084, "job": 86, "event": "table_file_deletion", "file_number": 137}
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108028109, "job": 86, "event": "table_file_deletion", "file_number": 135}
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.941212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:08 compute-2 sudo[259657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:08 compute-2 sudo[259657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:08 compute-2 sudo[259657]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:08 compute-2 sudo[259682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:08 compute-2 sudo[259682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:08 compute-2 sudo[259682]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:08 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:08 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:08.552+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:09 compute-2 ceph-mon[77081]: pgmap v2323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:09 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:09.521+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:09.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:10 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:10.556+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:11.507+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:11 compute-2 ceph-mon[77081]: pgmap v2324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:11 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:11.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:12.500+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:12 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:12 compute-2 ceph-mon[77081]: pgmap v2325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:13.509+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:13.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:13 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 14:45:13 compute-2 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:13 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:14.479+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:14 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:14 compute-2 ceph-mon[77081]: pgmap v2326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:15.434+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:15.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:15 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:16.441+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:17 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:17 compute-2 ceph-mon[77081]: pgmap v2327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:17.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:17.482+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:17.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:18 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:18 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4107 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:18.490+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.713681) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118713768, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 389, "num_deletes": 251, "total_data_size": 292936, "memory_usage": 300376, "flush_reason": "Manual Compaction"}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118717671, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 192034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69520, "largest_seqno": 69904, "table_properties": {"data_size": 189789, "index_size": 344, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5891, "raw_average_key_size": 18, "raw_value_size": 185289, "raw_average_value_size": 595, "num_data_blocks": 15, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093108, "oldest_key_time": 1769093108, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 4012 microseconds, and 1531 cpu microseconds.
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717712) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 192034 bytes OK
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717731) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719588) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719606) EVENT_LOG_v1 {"time_micros": 1769093118719599, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 290358, prev total WAL file size 290358, number of live WAL files 2.
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.720071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(187KB)], [138(11MB)]
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118720114, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 12347202, "oldest_snapshot_seqno": -1}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 11487 keys, 10715069 bytes, temperature: kUnknown
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118781781, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10715069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10651467, "index_size": 33793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28741, "raw_key_size": 313309, "raw_average_key_size": 27, "raw_value_size": 10455002, "raw_average_value_size": 910, "num_data_blocks": 1254, "num_entries": 11487, "num_filter_entries": 11487, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.782993) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10715069 bytes
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.784389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.8 rd, 173.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(120.1) write-amplify(55.8) OK, records in: 11998, records dropped: 511 output_compression: NoCompression
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.784410) EVENT_LOG_v1 {"time_micros": 1769093118784400, "job": 88, "event": "compaction_finished", "compaction_time_micros": 61788, "compaction_time_cpu_micros": 27782, "output_level": 6, "num_output_files": 1, "total_output_size": 10715069, "num_input_records": 11998, "num_output_records": 11487, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118784575, "job": 88, "event": "table_file_deletion", "file_number": 140}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118787342, "job": 88, "event": "table_file_deletion", "file_number": 138}
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:18 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:45:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:19.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:45:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2689568655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:45:19 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:19 compute-2 ceph-mon[77081]: pgmap v2328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:19.473+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:19.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:20 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:20.476+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:21.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:21 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:21 compute-2 ceph-mon[77081]: pgmap v2329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:21.470+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:22 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:22.496+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:23 compute-2 podman[259715]: 2026-01-22 14:45:23.041380851 +0000 UTC m=+0.083956703 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 14:45:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:23 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:23 compute-2 ceph-mon[77081]: pgmap v2330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:23 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4112 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:23.545+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:23.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:24 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:24.556+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:25 compute-2 ceph-mon[77081]: pgmap v2331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:25 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:25.604+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:25.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:26 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:26.632+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:27 compute-2 ceph-mon[77081]: pgmap v2332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:27 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:27.670+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:28 compute-2 sudo[259738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:28 compute-2 sudo[259738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:28 compute-2 sudo[259738]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:28 compute-2 sudo[259763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:28 compute-2 sudo[259763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:28 compute-2 sudo[259763]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:28 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:28 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:28.665+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:45:28 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 70K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1902 writes, 9925 keys, 1902 commit groups, 1.0 writes per commit group, ingest: 16.49 MB, 0.03 MB/s
                                           Interval WAL: 1902 writes, 1902 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     81.8      0.93              0.28        44    0.021       0      0       0.0       0.0
                                             L6      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.2    137.9    118.3      3.36              1.27        43    0.078    364K    23K       0.0       0.0
                                            Sum      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.2    108.0    110.4      4.29              1.55        87    0.049    364K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    114.1    116.1      0.82              0.42        16    0.051     92K   4158       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    137.9    118.3      3.36              1.27        43    0.078    364K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     82.1      0.93              0.28        43    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.074, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.46 GB write, 0.11 MB/s write, 0.45 GB read, 0.11 MB/s read, 4.3 seconds
                                           Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 50.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000539 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2667,48.19 MB,15.8516%) FilterBlock(87,1018.30 KB,0.327115%) IndexBlock(87,1.34 MB,0.440181%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 14:45:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:29.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:29.673+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:29.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:30 compute-2 ceph-mon[77081]: pgmap v2333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:30 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:30.702+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:31 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:31 compute-2 ceph-mon[77081]: pgmap v2334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:31 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:31.655+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:32 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:32.644+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:33 compute-2 ceph-mon[77081]: pgmap v2335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:33 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:33 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:33.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:33 compute-2 sudo[259791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:33 compute-2 sudo[259791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:33 compute-2 sudo[259791]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:33 compute-2 sudo[259816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:45:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:33 compute-2 sudo[259816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:33 compute-2 sudo[259816]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:33 compute-2 sudo[259841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:33 compute-2 sudo[259841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:33 compute-2 sudo[259841]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:33 compute-2 sudo[259866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:45:33 compute-2 sudo[259866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:34 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:34 compute-2 sudo[259866]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:34.662+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:35 compute-2 podman[259923]: 2026-01-22 14:45:35.035903676 +0000 UTC m=+0.090032994 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:45:35 compute-2 ceph-mon[77081]: pgmap v2336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:35 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:35.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:35.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:36 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:45:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:45:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:45:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:45:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:45:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:36.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:37 compute-2 ceph-mon[77081]: pgmap v2337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:37 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:37.722+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:37.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:38 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:38 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:38.748+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:39 compute-2 ceph-mon[77081]: pgmap v2338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:39 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:39.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:39.713+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:40 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:40.691+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:41 compute-2 ceph-mon[77081]: pgmap v2339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:41.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:41 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:41.647+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:42 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:42 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:45:42 compute-2 sudo[259952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:42 compute-2 sudo[259952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:42 compute-2 sudo[259952]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:42 compute-2 sudo[259977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:45:42 compute-2 sudo[259977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:42 compute-2 sudo[259977]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:42.643+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:43 compute-2 ceph-mon[77081]: pgmap v2340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:43 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 14:45:43 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:43.608+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:43.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:44 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:44.630+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:45 compute-2 ceph-mon[77081]: pgmap v2341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:45 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:45.678+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:46 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:46.644+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:45:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:45:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:45:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:45:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:45:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:45:47 compute-2 ceph-mon[77081]: pgmap v2342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:47 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:47.633+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:48 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:48 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:48 compute-2 sudo[260005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:48 compute-2 sudo[260005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:48 compute-2 sudo[260005]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:48 compute-2 sudo[260030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:45:48 compute-2 sudo[260030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:45:48 compute-2 sudo[260030]: pam_unix(sudo:session): session closed for user root
Jan 22 14:45:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:48.586+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:49 compute-2 ceph-mon[77081]: pgmap v2343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:49 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:49.596+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:49.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:50 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:50.554+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:51 compute-2 ceph-mon[77081]: pgmap v2344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:51 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:51.552+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:51.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:52 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:52.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:45:53 compute-2 ceph-mon[77081]: pgmap v2345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:53 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:53 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:53.637+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:53.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:54 compute-2 podman[260058]: 2026-01-22 14:45:54.044492431 +0000 UTC m=+0.097272306 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:45:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:54 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:54.602+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:55 compute-2 ceph-mon[77081]: pgmap v2346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:55 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:55.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:55.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:56 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:56.588+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:45:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:45:57 compute-2 ceph-mon[77081]: pgmap v2347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:57 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:57.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:45:57.704 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:45:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:45:57.705 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:45:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:57.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:58 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:58 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:45:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:58.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:45:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:59.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:45:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:45:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:59.562+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:45:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:45:59 compute-2 ceph-mon[77081]: pgmap v2348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:45:59 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 14:45:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:45:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:45:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:00 compute-2 ovn_controller[133156]: 2026-01-22T14:46:00Z|00077|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 14:46:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:00.606+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:46:00 compute-2 ceph-mon[77081]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:46:00 compute-2 ceph-mon[77081]: pgmap v2349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:01.631+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:46:01 compute-2 ceph-mon[77081]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 14:46:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:02.599+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:02 compute-2 ceph-mon[77081]: pgmap v2350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:02 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:03.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:03.555+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:03 compute-2 ceph-mon[77081]: Health check update: 46 slow ops, oldest one blocked for 4153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:03 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:04.530+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:04 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:04 compute-2 ceph-mon[77081]: pgmap v2351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:05.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:05.542+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:05 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:46:05.707 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:46:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:05.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:05 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:06 compute-2 podman[260083]: 2026-01-22 14:46:06.08150313 +0000 UTC m=+0.135555129 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:46:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:06.589+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:07 compute-2 ceph-mon[77081]: pgmap v2352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:07 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:07.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:07.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:08 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:08 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:08 compute-2 sudo[260110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:08 compute-2 sudo[260110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:08 compute-2 sudo[260110]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:08.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:08 compute-2 sudo[260135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:08 compute-2 sudo[260135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:08 compute-2 sudo[260135]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:09 compute-2 ceph-mon[77081]: pgmap v2353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:09 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:09.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:09.604+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:10 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:10.606+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:11 compute-2 ceph-mon[77081]: pgmap v2354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:11 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:11.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:11.642+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:12 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:12.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:13 compute-2 ceph-mon[77081]: pgmap v2355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:13 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:13 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:13.559+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:13.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:14 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:14.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:15 compute-2 ceph-mon[77081]: pgmap v2356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:15 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:15.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:15.638+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:16 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:16.663+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:17 compute-2 ceph-mon[77081]: pgmap v2357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:17 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:17.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:17.625+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:18 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:18 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:46:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:46:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:46:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:46:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:18.619+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:46:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:46:19 compute-2 ceph-mon[77081]: pgmap v2358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 580 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:19 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:19.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:20 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:20.696+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:21.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:21 compute-2 ceph-mon[77081]: pgmap v2359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:21 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:21.702+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:22 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:22.685+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:23.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:23 compute-2 ceph-mon[77081]: pgmap v2360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:23 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:23 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:23.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:23.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:24 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:24.666+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:25 compute-2 podman[260169]: 2026-01-22 14:46:25.034124703 +0000 UTC m=+0.083560992 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:46:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:25 compute-2 ceph-mon[77081]: pgmap v2361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:25 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:25.642+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:26 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:26.597+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:27.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:27 compute-2 ceph-mon[77081]: pgmap v2362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:27 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:27.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:27.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:28 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:28 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:28.639+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:28 compute-2 sudo[260189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:28 compute-2 sudo[260189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:28 compute-2 sudo[260189]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:28 compute-2 sudo[260215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:28 compute-2 sudo[260215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:28 compute-2 sudo[260215]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:29 compute-2 ceph-mon[77081]: pgmap v2363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:29 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:29.635+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:30 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:30.645+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:46:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.5 total, 600.0 interval
                                           Cumulative writes: 10K writes, 36K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2992 syncs, 3.43 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 952 writes, 1811 keys, 952 commit groups, 1.0 writes per commit group, ingest: 0.82 MB, 0.00 MB/s
                                           Interval WAL: 952 writes, 454 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:46:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:31 compute-2 ceph-mon[77081]: pgmap v2364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 22 14:46:31 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:31.606+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 14:46:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:32 compute-2 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 14:46:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:32.578+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:33 compute-2 sshd-session[260241]: Invalid user ubuntu from 45.148.10.240 port 43628
Jan 22 14:46:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:33.591+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:33 compute-2 sshd-session[260241]: Connection closed by invalid user ubuntu 45.148.10.240 port 43628 [preauth]
Jan 22 14:46:33 compute-2 ceph-mon[77081]: pgmap v2365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:33 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:33 compute-2 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:34.567+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:34 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:34 compute-2 ceph-mon[77081]: pgmap v2366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 22 14:46:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:35.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:35.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:35 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:36.550+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:36 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:36 compute-2 ceph-mon[77081]: pgmap v2367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:37 compute-2 podman[260246]: 2026-01-22 14:46:37.060872441 +0000 UTC m=+0.115839797 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:46:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:37.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:37.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:37 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:37.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:38.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:38 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:38 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:38 compute-2 ceph-mon[77081]: pgmap v2368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:39.653+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:39 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:39.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:40.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:40 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:40 compute-2 ceph-mon[77081]: pgmap v2369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:41.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:41.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:41 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:41 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:41.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:42 compute-2 sudo[260273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:42 compute-2 sudo[260273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-2 sudo[260273]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:42 compute-2 sudo[260298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:46:42 compute-2 sudo[260298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-2 sudo[260298]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:42 compute-2 sudo[260323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:42 compute-2 sudo[260323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-2 sudo[260323]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:42.722+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:42 compute-2 sudo[260348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:46:42 compute-2 sudo[260348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:42 compute-2 ceph-mon[77081]: pgmap v2370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:42 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:43 compute-2 sudo[260348]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:43.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:43.714+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:43 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:43 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:46:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:46:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:44.750+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:46:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:46:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:46:44 compute-2 ceph-mon[77081]: pgmap v2371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 14:46:44 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:45.780+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:45 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:46.784+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:46 compute-2 ceph-mon[77081]: pgmap v2372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 8 op/s
Jan 22 14:46:46 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:46:47.224 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:46:47.224 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:46:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:46:47.224 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:46:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:47.798+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:47.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:47 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:47 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:48.835+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:48 compute-2 ceph-mon[77081]: pgmap v2373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:49 compute-2 sudo[260408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:49 compute-2 sudo[260408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:49 compute-2 sudo[260408]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:46:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:46:49 compute-2 sudo[260433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:49 compute-2 sudo[260433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:49 compute-2 sudo[260433]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:49.785+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:50 compute-2 sudo[260458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:46:50 compute-2 sudo[260458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:50 compute-2 sudo[260458]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:50 compute-2 sudo[260483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:46:50 compute-2 sudo[260483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:46:50 compute-2 sudo[260483]: pam_unix(sudo:session): session closed for user root
Jan 22 14:46:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:50.749+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:50 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:50 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:46:50 compute-2 ceph-mon[77081]: pgmap v2374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:51.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:51.761+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:51 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:52.799+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:52 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:52 compute-2 ceph-mon[77081]: pgmap v2375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:52 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:53.786+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:53 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:53 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:54.797+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:54 compute-2 ceph-mon[77081]: pgmap v2376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:54 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:55.766+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:55 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:56 compute-2 podman[260511]: 2026-01-22 14:46:56.015048344 +0000 UTC m=+0.064947160 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:46:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:56.764+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:56 compute-2 ceph-mon[77081]: pgmap v2377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:56 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:57.739+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:57 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:58.737+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:58 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:46:58 compute-2 ceph-mon[77081]: pgmap v2378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:46:58 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:46:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:46:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:46:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:59.777+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:46:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:46:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:46:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:46:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:46:59 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:00.740+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:00 compute-2 ceph-mon[77081]: pgmap v2379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 14:47:00 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:01.725+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:01 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:47:01.874 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:47:01 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:47:01.875 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:47:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:02 compute-2 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:47:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:02.713+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:02 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:47:02.879 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:47:03 compute-2 ceph-mon[77081]: pgmap v2380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 22 14:47:03 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:03 compute-2 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:03.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:03.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:03.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:04 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:04.684+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:05 compute-2 ceph-mon[77081]: pgmap v2381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Jan 22 14:47:05 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:05.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:06.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:06 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:07.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:07 compute-2 ceph-mon[77081]: pgmap v2382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 14:47:07 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:08 compute-2 podman[260537]: 2026-01-22 14:47:08.057670382 +0000 UTC m=+0.107952628 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:47:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:08.691+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:08 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:08 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:08 compute-2 ceph-mon[77081]: pgmap v2383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 14:47:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:09 compute-2 sudo[260565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:09 compute-2 sudo[260565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:09 compute-2 sudo[260565]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:09 compute-2 sudo[260590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:09 compute-2 sudo[260590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:09 compute-2 sudo[260590]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:09.658+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:09 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:10.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:10 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:10 compute-2 ceph-mon[77081]: pgmap v2384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 14:47:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:11.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:11 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:12.726+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:12 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:12 compute-2 ceph-mon[77081]: pgmap v2385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:47:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:13.733+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:13 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:13 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:13 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:13.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:14.692+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:14 compute-2 ceph-mon[77081]: pgmap v2386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 22 14:47:14 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:15.693+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:15.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:16 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:16.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:17 compute-2 sshd-session[260619]: Connection closed by 54.89.106.110 port 47390 [preauth]
Jan 22 14:47:17 compute-2 ceph-mon[77081]: pgmap v2387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Jan 22 14:47:17 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:17.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:17.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:18 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:18 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:18.605+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:47:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2772379494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:47:19 compute-2 ceph-mon[77081]: pgmap v2388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:19 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:19.619+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:19.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:20 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:20.647+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:21 compute-2 ceph-mon[77081]: pgmap v2389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:21 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:21.684+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:21.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:22 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:22.659+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:23.614+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:23 compute-2 ceph-mon[77081]: pgmap v2390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:23 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:23 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:23.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:24.638+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:24 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:24 compute-2 ceph-mon[77081]: pgmap v2391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:25.597+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:25 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:25.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:26.626+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:27 compute-2 podman[260626]: 2026-01-22 14:47:27.05278509 +0000 UTC m=+0.096920360 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 14:47:27 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:27 compute-2 ceph-mon[77081]: pgmap v2392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:27.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:27.614+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:27.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:28 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:28 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:28 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:28.594+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:29 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:29 compute-2 ceph-mon[77081]: pgmap v2393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:29 compute-2 sudo[260646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:29 compute-2 sudo[260646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:29 compute-2 sudo[260646]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:29.603+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:29 compute-2 sudo[260671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:29 compute-2 sudo[260671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:29 compute-2 sudo[260671]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:30.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:30 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:30.569+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:31.533+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:31 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:31 compute-2 ceph-mon[77081]: pgmap v2394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:32.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:32.540+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:32 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:33.566+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:33 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:33 compute-2 ceph-mon[77081]: pgmap v2395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:33 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4243 sec, osd.2 has slow ops (SLOW_OPS)
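This health update is the key line in the section: the blocked time advances in lockstep with the wall clock (4243 sec here, then 4248, 4253, 4258, 4263, 4268 at five-second intervals below), so the same oldest op — the omap-get-vals read of rbd_mirror_snapshot_schedule that osd.2 names in every get_health_metrics line — has been stuck for over 70 minutes; this is one wedged request, not a backlog draining. A sketch that makes the progression explicit (Python; regex inferred from the line format, the log path is illustrative):

    import re

    HEALTH_RE = re.compile(
        r'Health check update: (?P<count>\d+) slow ops, '
        r'oldest one blocked for (?P<blocked>\d+) sec, '
        r'(?P<who>\S+) has slow ops \(SLOW_OPS\)'
    )

    def slow_op_updates(lines):
        """Yield (count, blocked_sec, daemon) per SLOW_OPS health update.
        blocked_sec growing 1:1 with wall time means the oldest op never
        completes; a shrinking value would mean the queue is draining."""
        for line in lines:
            m = HEALTH_RE.search(line)
            if m:
                yield int(m['count']), int(m['blocked']), m['who']

    updates = list(slow_op_updates(open('/var/log/messages')))  # illustrative path
    stuck = all(b2 > b1 for (_, b1, _), (_, b2, _) in zip(updates, updates[1:]))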
Jan 22 14:47:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:34.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:34.541+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:34 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:35.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:35.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:35 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:35 compute-2 ceph-mon[77081]: pgmap v2396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:36.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:36.609+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:36 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:36 compute-2 ceph-mon[77081]: pgmap v2397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:36 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:37.659+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:37 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:38.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:38.676+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:39 compute-2 podman[260701]: 2026-01-22 14:47:39.106498931 +0000 UTC m=+0.161323824 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
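Interleaved with the Ceph noise, podman emits one "container health_status" event per healthcheck run; the useful fields (name=ovn_controller, health_status=healthy, health_failing_streak=0) sit in a flat attribute list alongside the full Kolla config_data blob. A sketch extracting just the health fields (Python; plain string matching, the nested config_data dict is deliberately ignored):

    import re

    EVENT = re.compile(r'container health_status (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)\s*$')

    def parse_health_event(line):
        """Pull container name, health_status and failing streak out of a
        podman 'container health_status' journal line."""
        m = EVENT.search(line)
        if not m:
            return None
        out = {'id': m['cid'][:12]}
        for key in ('name', 'health_status', 'health_failing_streak'):
            kv = re.search(rf'(?:^|, ){key}=([^,)]+)', m['attrs'])
            if kv:
                out[key] = kv.group(1)
        return out

A nonzero health_failing_streak on these lines would be the earliest sign of ovn_controller degrading, independent of the Ceph slow-op stream.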
Jan 22 14:47:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:39 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:39 compute-2 ceph-mon[77081]: pgmap v2398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:39 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:39.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:40.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:40 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:40.670+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:41.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:41 compute-2 ceph-mon[77081]: pgmap v2399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:41 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:41.720+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:42 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:42.710+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:43.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:43 compute-2 ceph-mon[77081]: pgmap v2400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:43 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:43 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:43.681+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:44.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:44 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:44.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:45.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:45 compute-2 ceph-mon[77081]: pgmap v2401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:45 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:45.626+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:46.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:46 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:46.652+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:47:47.225 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:47:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:47:47.225 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:47:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:47:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
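The three ovn_metadata_agent DEBUG lines are oslo.concurrency's standard lock trace: acquiring, acquired after a measured wait (0.001s), released after a measured hold (0.000s). A sketch aggregating those durations per lock, in case agent-side contention rather than Ceph were suspected (Python; regexes inferred from the messages above):

    import re

    ACQ = re.compile(r'Lock "(?P<lock>[^"]+)" acquired by "[^"]+" :: waited (?P<t>[\d.]+)s')
    REL = re.compile(r'Lock "(?P<lock>[^"]+)" "released" by "[^"]+" :: held (?P<t>[\d.]+)s')

    def lock_stats(lines):
        """Collect per-lock wait/hold durations from oslo.concurrency
        DEBUG traces like the _check_child_processes trio above."""
        stats = {}
        for line in lines:
            for rx, field in ((ACQ, 'waited'), (REL, 'held')):
                m = rx.search(line)
                if m:
                    stats.setdefault(m['lock'], {'waited': [], 'held': []})[field].append(float(m['t']))
        return stats

The one wait shown here is a millisecond, so the metadata agent looks healthy; these lines matter mainly as a liveness signal.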
Jan 22 14:47:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:47 compute-2 ceph-mon[77081]: pgmap v2402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:47 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:47.696+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:48 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:48 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:48.712+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:49 compute-2 ceph-mon[77081]: pgmap v2403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:49 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:49.704+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:49 compute-2 sudo[260734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:49 compute-2 sudo[260734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:49 compute-2 sudo[260734]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:49 compute-2 sudo[260759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:49 compute-2 sudo[260759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:49 compute-2 sudo[260759]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:50 compute-2 sudo[260784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:50 compute-2 sudo[260784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-2 sudo[260784]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-2 sudo[260809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:47:50 compute-2 sudo[260809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-2 sudo[260809]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-2 sudo[260834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:50 compute-2 sudo[260834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-2 sudo[260834]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:50 compute-2 sudo[260859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:47:50 compute-2 sudo[260859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:50 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:50.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:51 compute-2 sudo[260859]: pam_unix(sudo:session): session closed for user root
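The burst of sudo entries from ceph-admin — repeated /bin/true probes, a `which python3`, then the staged cephadm binary run with `--timeout 895 gather-facts` — is consistent with cephadm's periodic host inventory over SSH; each command is bracketed by a pam_unix session open/close pair sharing the sudo PID. A sketch pairing those lines to time each session (Python; the year is an assumption, since syslog timestamps omit it):

    import re
    from datetime import datetime

    SUDO = re.compile(r'^(?P<ts>\w{3} +\d+ [\d:]{8}) \S+ sudo\[(?P<pid>\d+)\]: (?P<msg>.*)$')

    def sudo_sessions(lines, year=2026):   # year assumed; not in the timestamps
        """Pair pam_unix 'session opened'/'session closed' sudo lines by
        PID and report each session's duration (1 s resolution)."""
        opened, done = {}, []
        for line in lines:
            m = SUDO.match(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", '%Y %b %d %H:%M:%S')
            if 'session opened' in m['msg']:
                opened[m['pid']] = ts
            elif 'session closed' in m['msg'] and m['pid'] in opened:
                done.append((m['pid'], (ts - opened.pop(m['pid'])).total_seconds()))
        return done

For the gather-facts run above this yields ('260859', 1.0): opened at 14:47:50, closed at 14:47:51.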
Jan 22 14:47:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:47:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:51.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:47:51 compute-2 ceph-mon[77081]: pgmap v2404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:51 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:47:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:47:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:51.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:52 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:52 compute-2 ceph-mon[77081]: pgmap v2405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:52.728+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:53.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:53.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:53 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:53 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:54.694+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:55 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:55 compute-2 ceph-mon[77081]: pgmap v2406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:55.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:55.654+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 14:47:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:56.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 14:47:56 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:56 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:56.649+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:57 compute-2 ceph-mon[77081]: pgmap v2407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:57 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:57.640+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:57 compute-2 podman[260920]: 2026-01-22 14:47:57.997424707 +0000 UTC m=+0.058904530 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 14:47:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:47:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:58.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:47:58 compute-2 sudo[260940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:47:58 compute-2 sudo[260940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:58 compute-2 sudo[260940]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:58 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:58 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:47:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:47:58 compute-2 sudo[260965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:47:58 compute-2 sudo[260965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:47:58 compute-2 sudo[260965]: pam_unix(sudo:session): session closed for user root
Jan 22 14:47:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:58.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:47:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:47:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:59.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:47:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:47:59 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:59 compute-2 ceph-mon[77081]: pgmap v2408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:47:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:47:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:47:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:59.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:00.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:00.572+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:00 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.878784) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280878818, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2445, "num_deletes": 251, "total_data_size": 4756343, "memory_usage": 4825256, "flush_reason": "Manual Compaction"}
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280897910, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 3081552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69909, "largest_seqno": 72349, "table_properties": {"data_size": 3072496, "index_size": 5229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23031, "raw_average_key_size": 21, "raw_value_size": 3052524, "raw_average_value_size": 2823, "num_data_blocks": 226, "num_entries": 1081, "num_filter_entries": 1081, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093119, "oldest_key_time": 1769093119, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 19197 microseconds, and 6903 cpu microseconds.
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897974) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 3081552 bytes OK
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897999) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.900181) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.900198) EVENT_LOG_v1 {"time_micros": 1769093280900192, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.900234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 4745301, prev total WAL file size 4745301, number of live WAL files 2.
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.901895) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(3009KB)], [141(10MB)]
Jan 22 14:48:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280902007, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 13796621, "oldest_snapshot_seqno": -1}
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 12051 keys, 12161457 bytes, temperature: kUnknown
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281023144, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 12161457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12093413, "index_size": 36827, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 326770, "raw_average_key_size": 27, "raw_value_size": 11886173, "raw_average_value_size": 986, "num_data_blocks": 1377, "num_entries": 12051, "num_filter_entries": 12051, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.023476) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 12161457 bytes
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.025509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.9 rd, 100.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.2 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 12568, records dropped: 517 output_compression: NoCompression
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.025525) EVENT_LOG_v1 {"time_micros": 1769093281025518, "job": 90, "event": "compaction_finished", "compaction_time_micros": 121180, "compaction_time_cpu_micros": 54985, "output_level": 6, "num_output_files": 1, "total_output_size": 12161457, "num_input_records": 12568, "num_output_records": 12051, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281026687, "job": 90, "event": "table_file_deletion", "file_number": 143}
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281028957, "job": 90, "event": "table_file_deletion", "file_number": 141}
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.901732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:48:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:01 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:01 compute-2 ceph-mon[77081]: pgmap v2409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:01.622+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:02.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:02.616+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:02 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:03.225 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:48:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:03.226 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:48:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:03.588+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:03 compute-2 ceph-mon[77081]: pgmap v2410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:03 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:03 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:04.543+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:04 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:05.526+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:05 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:05 compute-2 ceph-mon[77081]: pgmap v2411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:06.487+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:07 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:07 compute-2 ceph-mon[77081]: pgmap v2412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:07.468+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:08 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:08 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:08 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:08 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:08.228 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:48:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:08.496+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:09 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:09 compute-2 ceph-mon[77081]: pgmap v2413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:09.509+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:09 compute-2 sudo[260996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:09 compute-2 sudo[260996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:09 compute-2 sudo[260996]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:09 compute-2 sudo[261027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:09 compute-2 sudo[261027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:09 compute-2 sudo[261027]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:10.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:10 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:10 compute-2 podman[261020]: 2026-01-22 14:48:10.079183109 +0000 UTC m=+0.155069799 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 14:48:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:10.546+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:11 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:11 compute-2 ceph-mon[77081]: pgmap v2414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:11.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:11.580+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:12.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:12 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:12.613+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:13 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:13 compute-2 ceph-mon[77081]: pgmap v2415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:13 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:13.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:13.622+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:14.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:14 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:14.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:15 compute-2 ceph-mon[77081]: pgmap v2416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:15 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:15.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:15.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:16.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:16 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:16.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:17.674+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:17 compute-2 ceph-mon[77081]: pgmap v2417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:17 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:18.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:48:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:48:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:48:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:48:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:18.721+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:19 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:48:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:48:19 compute-2 ceph-mon[77081]: pgmap v2418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:19.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:19.762+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:20.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:20 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:20 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:20.772+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:21.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:21 compute-2 ceph-mon[77081]: pgmap v2419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:21 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:21.800+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:22.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:22 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:22 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:22.833+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:23.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:23 compute-2 ceph-mon[77081]: pgmap v2420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:23.825+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:24.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:24 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:24 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:24 compute-2 ceph-mon[77081]: pgmap v2421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:24.857+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:25 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:25.902+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:26.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:26.861+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:26 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:26 compute-2 ceph-mon[77081]: pgmap v2422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:27.899+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:27 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:27 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:28.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:28.931+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:28 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:28 compute-2 ceph-mon[77081]: pgmap v2423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:28 compute-2 podman[261085]: 2026-01-22 14:48:28.985786461 +0000 UTC m=+0.051150926 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:48:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:29 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:29.974+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:30 compute-2 sudo[261104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:30 compute-2 sudo[261104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:30 compute-2 sudo[261104]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:30 compute-2 sudo[261129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:30 compute-2 sudo[261129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:30 compute-2 sudo[261129]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:30.950+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:30 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:30 compute-2 ceph-mon[77081]: pgmap v2424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:31.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:31.914+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:31 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:32.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:32.953+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:32 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:32 compute-2 ceph-mon[77081]: pgmap v2425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:33.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:33.908+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:34.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:34 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:34 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:34.882+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:35 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:35 compute-2 ceph-mon[77081]: pgmap v2426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:35.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:35.848+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:36.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:36 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:36.833+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:37 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:37 compute-2 ceph-mon[77081]: pgmap v2427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:37.854+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:38.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:38.852+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:38 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:38 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:39.816+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:40.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:40 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:40 compute-2 ceph-mon[77081]: pgmap v2428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:40 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:40 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:40.776+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:41 compute-2 podman[261160]: 2026-01-22 14:48:41.073082098 +0000 UTC m=+0.126170439 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 22 14:48:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:41 compute-2 ceph-mon[77081]: pgmap v2429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:41 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:41.743+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:42.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:42 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:42.252 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:48:42 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:42.254 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:48:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:42.742+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:43 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:43 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:43.255 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:48:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:43.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:43.735+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:44.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:44 compute-2 ceph-mon[77081]: pgmap v2430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:44 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:44 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:44 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:44.742+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:45 compute-2 ceph-mon[77081]: pgmap v2431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:45 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:45.708+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:46 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:46.749+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:48:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:48:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:48:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:48:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:47.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:47.716+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:48.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:48 compute-2 ceph-mon[77081]: pgmap v2432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:48 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:48.710+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:48 compute-2 sshd-session[261189]: Invalid user ubuntu from 45.148.10.240 port 48132
Jan 22 14:48:48 compute-2 sshd-session[261189]: Connection closed by invalid user ubuntu 45.148.10.240 port 48132 [preauth]
Jan 22 14:48:49 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:49 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:49 compute-2 ceph-mon[77081]: pgmap v2433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:49 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:49.664+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:50.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:50 compute-2 sudo[261192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:50 compute-2 sudo[261192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:50 compute-2 sudo[261192]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:50 compute-2 sudo[261217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:50 compute-2 sudo[261217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:50 compute-2 sudo[261217]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:50 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:50.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:51 compute-2 ceph-mon[77081]: pgmap v2434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:51 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:51.678+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:52.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:52 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:52.680+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:53.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:53 compute-2 ceph-mon[77081]: pgmap v2435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:53 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:53 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:53.670+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:54.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:54.677+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:54 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:48:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:48:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:55.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:55 compute-2 ceph-mon[77081]: pgmap v2436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:55 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:48:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:56.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:48:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:56.711+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:56 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:56 compute-2 ceph-mon[77081]: pgmap v2437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:57.714+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:57 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:58.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:58 compute-2 sudo[261246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:58 compute-2 sudo[261246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:58 compute-2 sudo[261246]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:58 compute-2 sudo[261271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:48:58 compute-2 sudo[261271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:58 compute-2 sudo[261271]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:58.709+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:58 compute-2 sudo[261296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:58 compute-2 sudo[261296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:58 compute-2 sudo[261296]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:58 compute-2 sudo[261322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:48:58 compute-2 sudo[261322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:59 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:59 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:48:59 compute-2 ceph-mon[77081]: pgmap v2438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:48:59 compute-2 sudo[261322]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:59 compute-2 podman[261362]: 2026-01-22 14:48:59.195370171 +0000 UTC m=+0.079333707 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 14:48:59 compute-2 sudo[261384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:59 compute-2 sudo[261384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:59 compute-2 sudo[261384]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:59 compute-2 sudo[261409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:48:59 compute-2 sudo[261409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:59 compute-2 sudo[261409]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:48:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:48:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:48:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:48:59 compute-2 sudo[261434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:48:59 compute-2 sudo[261434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:59 compute-2 sudo[261434]: pam_unix(sudo:session): session closed for user root
Jan 22 14:48:59 compute-2 sudo[261459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:48:59 compute-2 sudo[261459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:48:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:48:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:48:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:59.704+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:00.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:00 compute-2 podman[261556]: 2026-01-22 14:49:00.127220147 +0000 UTC m=+0.073508294 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 14:49:00 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:49:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:00 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:49:00 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:00 compute-2 podman[261556]: 2026-01-22 14:49:00.251894395 +0000 UTC m=+0.198182572 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 14:49:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:00.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:00 compute-2 podman[261711]: 2026-01-22 14:49:00.927971914 +0000 UTC m=+0.051838554 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:49:00 compute-2 podman[261711]: 2026-01-22 14:49:00.939706312 +0000 UTC m=+0.063572922 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:49:01 compute-2 podman[261776]: 2026-01-22 14:49:01.14837932 +0000 UTC m=+0.058820248 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, version=2.2.4, build-date=2023-02-22T09:23:20, name=keepalived, vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1793, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 14:49:01 compute-2 podman[261776]: 2026-01-22 14:49:01.160743255 +0000 UTC m=+0.071184193 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, vcs-type=git, description=keepalived for Ceph, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 14:49:01 compute-2 ceph-mon[77081]: pgmap v2439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:01 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
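[annotation] The two recurring mon summary lines carry all the cluster-state numbers in this section: the pgmap line gives PG states and capacity, and the slow-requests line gives the count by type and the most affected pool. A small parser for both, using the exact strings from the lines above:

    # Sketch: extract the numbers from the recurring mon summary lines.
    import re

    PGMAP = re.compile(r"pgmap v(\d+): (\d+) pgs: (.+?); (.+) avail")
    SLOW = re.compile(
        r"(\d+) slow requests .*?\[ '(\w+)' : (\d+) \].*?\[ '(\w+)' : (\d+) \]")

    line1 = ("pgmap v2439: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
             "680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail")
    m = PGMAP.search(line1)
    states = {}
    for part in m.group(3).split(", "):
        count, state = part.split(" ", 1)
        states[state] = int(count)
    print(m.group(1), m.group(2), states)
    # -> 2439 305 {'active+clean+laggy': 2, 'active+clean': 303}

    line2 = ("61 slow requests (by type [ 'delayed' : 61 ] "
             "most affected pool [ 'vms' : 48 ])")
    t = SLOW.search(line2)
    print(t.group(1), t.group(2), t.group(4), t.group(5))
    # -> 61 delayed vms 48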
Jan 22 14:49:01 compute-2 sudo[261459]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:01 compute-2 sudo[261809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:01 compute-2 sudo[261809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:01 compute-2 sudo[261809]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:01 compute-2 sudo[261834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:49:01 compute-2 sudo[261834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:01 compute-2 sudo[261834]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:01 compute-2 sudo[261859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:01 compute-2 sudo[261859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:01 compute-2 sudo[261859]: pam_unix(sudo:session): session closed for user root
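[annotation] The ceph-admin sudo bursts (/bin/true, /bin/which python3) are cephadm's per-host reachability and prerequisite checks, run over SSH before real work like gather-facts. A rough local equivalent, purely illustrative of what the checks verify:

    # Sketch: what the cephadm host probe amounts to — can we become
    # root, and is python3 present? (Illustrative, not cephadm's code.)
    import subprocess

    def host_ok():
        ok = subprocess.run(["sudo", "/bin/true"]).returncode == 0
        py = subprocess.run(["sudo", "/bin/which", "python3"],
                            capture_output=True, text=True)
        return ok and py.returncode == 0, py.stdout.strip()

    print(host_ok())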
Jan 22 14:49:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
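[annotation] The anonymous HEAD / requests alternating from 192.168.122.100 and .102 every two seconds are the haproxy health checks hitting the RGW beast frontend. The beast access-log line has a fixed shape, so it parses cleanly; a sketch using the line above as the sample:

    # Sketch: parse a beast access-log line into fields.
    import re

    BEAST = re.compile(
        r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] "([^"]+)" '
        r'(\d+) (\d+).*latency=([\d.]+)s')

    sample = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
              '[22/Jan/2026:14:49:01.457 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.001000026s')
    ip, user, when, request, status, size, latency = BEAST.search(sample).groups()
    print(ip, request, status, float(latency))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 1.000026e-06 ... (seconds)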
Jan 22 14:49:01 compute-2 sudo[261884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:49:01 compute-2 sudo[261884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:01.679+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:02 compute-2 sudo[261884]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:02.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:02 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:49:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
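[annotation] This burst is the mgr's cephadm module doing its periodic refresh: regenerating a minimal ceph.conf, fetching the admin and bootstrap-osd keys, and listing destroyed OSDs. The same mon commands can be issued from any admin node; a sketch of the CLI equivalents (the osd tree state filter is passed positionally):

    # Sketch: the CLI forms of the mon commands dispatched above.
    import subprocess

    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    ):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)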
Jan 22 14:49:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:02.687+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:03 compute-2 ceph-mon[77081]: pgmap v2440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:03 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:03 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4333 sec, osd.2 has slow ops (SLOW_OPS)
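[annotation] The SLOW_OPS age counter is worth a sanity check: 4333 seconds before this 14:49:03 update puts the onset of the oldest blocked op at 13:36:50, and each subsequent update (4338, 4343, ...) advances by exactly the 5-second reporting interval, i.e. the op is still stuck, not being retried. A worked check:

    # Worked check: when did the oldest op get stuck?
    from datetime import datetime, timedelta

    update = datetime(2026, 1, 22, 14, 49, 3)
    onset = update - timedelta(seconds=4333)
    print(onset)   # 2026-01-22 13:36:50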
Jan 22 14:49:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:03.674+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:04.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:04 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
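[annotation] The _set_new_cache_sizes line reports the mon's cache allocations in raw bytes, which reads more easily in MiB: cache_size is roughly 973 MiB, with the inc/full allocations at exactly 332 MiB and the KV (RocksDB) allocation at exactly 304 MiB. The conversion:

    # Sketch: the mon cache allocation line in plain units.
    for name, b in {
        "cache_size": 1020054731,
        "inc_alloc": 348127232,
        "full_alloc": 348127232,
        "kv_alloc": 318767104,
    }.items():
        print(f"{name:>10}: {b / 2**20:8.1f} MiB")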
Jan 22 14:49:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:04.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:05 compute-2 ceph-mon[77081]: pgmap v2441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:05 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:05.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:05.716+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:06.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:06 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:06.712+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:07 compute-2 ceph-mon[77081]: pgmap v2442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:07 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:07.695+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:08.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:08 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:08 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:08.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:09 compute-2 ceph-mon[77081]: pgmap v2443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:09 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:09.726+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:10 compute-2 sudo[261945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:10 compute-2 sudo[261945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:10 compute-2 sudo[261945]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:10 compute-2 sudo[261970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:49:10 compute-2 sudo[261970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:10 compute-2 sudo[261970]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:10.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:10 compute-2 sudo[261995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:10 compute-2 sudo[261995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:10 compute-2 sudo[261995]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:10 compute-2 sudo[262020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:10 compute-2 sudo[262020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:10 compute-2 sudo[262020]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:49:10 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:10.727+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:11.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:11 compute-2 ceph-mon[77081]: pgmap v2444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:11 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:11.766+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:12 compute-2 podman[262046]: 2026-01-22 14:49:12.099124987 +0000 UTC m=+0.137880946 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
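[annotation] The health_status=healthy event for ovn_controller comes from the healthcheck declared in its config_data: the script mounted from /var/lib/openstack/healthchecks/ovn_controller and run inside the container as /openstack/healthcheck. The same probe can be run by hand; a sketch (`podman healthcheck run ovn_controller` is the shorter equivalent):

    # Sketch: re-run the container's own healthcheck manually.
    # Path taken from the config_data above.
    import subprocess

    rc = subprocess.run(
        ["podman", "exec", "ovn_controller", "/openstack/healthcheck"]
    ).returncode
    print("healthy" if rc == 0 else f"failing (rc={rc})")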
Jan 22 14:49:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:12.808+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:12 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:13.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:13.769+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:13 compute-2 ceph-mon[77081]: pgmap v2445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:13 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:13 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:14.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:14.775+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:14 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:14 compute-2 ceph-mon[77081]: pgmap v2446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:15.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:15.809+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:15 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:16.823+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:16 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:16 compute-2 ceph-mon[77081]: pgmap v2447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:16 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:49:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:17.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:49:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:17.786+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:17 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:18.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:49:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:49:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:49:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
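[annotation] client.openstack polling df and the volumes pool quota is the Cinder capacity check. The same two queries via the CLI with JSON output, under the assumption that the usual top-level keys (stats.total_avail_bytes, quota_max_bytes) are present in this Ceph release:

    # Sketch: the capacity queries client.openstack issues above.
    import json, subprocess

    def ceph_json(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    df = ceph_json("df")
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))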
Jan 22 14:49:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:18.794+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:18 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:49:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:49:18 compute-2 ceph-mon[77081]: pgmap v2448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:18 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:19.821+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:19 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:20.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:20.835+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:21 compute-2 ceph-mon[77081]: pgmap v2449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:21 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:21.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:21.839+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:22 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:22.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:22.849+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:23 compute-2 ceph-mon[77081]: pgmap v2450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:23.881+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:24 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:24 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:24 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:49:24.416 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:49:24 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:49:24.419 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
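[annotation] After SB_Global.nb_cfg bumps to 32, the agent deliberately waits ("Delaying updating chassis table for 6 seconds") before writing neutron:ovn-metadata-sb-cfg back to its Chassis_Private row; the write lands at 14:49:30 below. A sketch of that jittered write-back pattern, explicitly not neutron's actual code, just the shape of it: stagger the acks so every agent on every compute does not hit the southbound DB in the same instant.

    # Sketch of the pattern (not neutron's implementation): jitter the
    # chassis write-back after an nb_cfg bump.
    import random, threading

    def delayed_update(nb_cfg, write):
        delay = random.randint(0, 10)          # the agent above chose 6 s
        print(f"Delaying updating chassis table for {delay} seconds")
        threading.Timer(delay, write, args=(nb_cfg,)).start()

    delayed_update(32, lambda v: print(
        "external_ids neutron:ovn-metadata-sb-cfg =", v))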
Jan 22 14:49:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:24.871+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:25 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:25 compute-2 ceph-mon[77081]: pgmap v2451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:25.844+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:26 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:26.811+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:27 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:27 compute-2 ceph-mon[77081]: pgmap v2452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:27 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:49:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:27.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:49:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:27.817+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 22 14:49:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:28.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 22 14:49:28 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:28 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:28.781+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:29 compute-2 ceph-mon[77081]: pgmap v2453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:29 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:49:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:29.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:49:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:29.744+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:30 compute-2 podman[262082]: 2026-01-22 14:49:30.047668424 +0000 UTC m=+0.100436482 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 14:49:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:49:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:30.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:49:30 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:30 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:49:30.421 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:49:30 compute-2 sudo[262102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:30 compute-2 sudo[262102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:30 compute-2 sudo[262102]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:30 compute-2 sudo[262127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:30 compute-2 sudo[262127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:30 compute-2 sudo[262127]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:30.742+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:31 compute-2 ceph-mon[77081]: pgmap v2454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:31 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:31.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:31.743+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:32.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:32 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:32.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:33 compute-2 ceph-mon[77081]: pgmap v2455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:33 compute-2 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 14:49:33 compute-2 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4363 sec, osd.2 has slow ops (SLOW_OPS)
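The blocked-for counter in these SLOW_OPS updates (4363 s here, then 4368, 4373, 4378 and 4383 s later in this window) advances in lockstep with wall-clock time, so the oldest op is not making slow progress; it is simply never completing. In minutes:

    # "blocked for" values from the Health check updates in this window
    blocked = [4363, 4368, 4373, 4378, 4383]
    for s in blocked:
        print(f"{s} s = {s / 60:.1f} min")   # all roughly 73 minutes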
Jan 22 14:49:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:33.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:33.695+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:49:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:49:34 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:34.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:35 compute-2 ceph-mon[77081]: pgmap v2456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:35 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:49:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:35.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:49:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:35.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:36 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:36.658+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:37 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:37 compute-2 ceph-mon[77081]: pgmap v2457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:37.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:37.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:49:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.166736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378166795, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1533, "num_deletes": 258, "total_data_size": 2844302, "memory_usage": 2887168, "flush_reason": "Manual Compaction"}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378181817, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1857518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72354, "largest_seqno": 73882, "table_properties": {"data_size": 1851434, "index_size": 3094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15427, "raw_average_key_size": 20, "raw_value_size": 1838085, "raw_average_value_size": 2460, "num_data_blocks": 134, "num_entries": 747, "num_filter_entries": 747, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093281, "oldest_key_time": 1769093281, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 15126 microseconds, and 7624 cpu microseconds.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181868) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1857518 bytes OK
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181891) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.183392) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.183439) EVENT_LOG_v1 {"time_micros": 1769093378183429, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.183465) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 2836985, prev total WAL file size 2845729, number of live WAL files 2.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.184525) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323636' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1813KB)], [144(11MB)]
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378184593, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 14018975, "oldest_snapshot_seqno": -1}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 12267 keys, 13864975 bytes, temperature: kUnknown
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378312908, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 13864975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13793906, "index_size": 39276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 332842, "raw_average_key_size": 27, "raw_value_size": 13581229, "raw_average_value_size": 1107, "num_data_blocks": 1477, "num_entries": 12267, "num_filter_entries": 12267, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.313231) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 13864975 bytes
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314244) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.2 rd, 108.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 12798, records dropped: 531 output_compression: NoCompression
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314260) EVENT_LOG_v1 {"time_micros": 1769093378314252, "job": 92, "event": "compaction_finished", "compaction_time_micros": 128411, "compaction_time_cpu_micros": 69306, "output_level": 6, "num_output_files": 1, "total_output_size": 13864975, "num_input_records": 12798, "num_output_records": 12267, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378314707, "job": 92, "event": "table_file_deletion", "file_number": 146}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378317019, "job": 92, "event": "table_file_deletion", "file_number": 144}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.184378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.318407) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378318466, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 250, "total_data_size": 23018, "memory_usage": 28768, "flush_reason": "Manual Compaction"}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378320267, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 13847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73884, "largest_seqno": 74138, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 1903 microseconds, and 756 cpu microseconds.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.320301) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 13847 bytes OK
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.320333) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.321722) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.321733) EVENT_LOG_v1 {"time_micros": 1769093378321729, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.321748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 21000, prev total WAL file size 21000, number of live WAL files 2.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.322121) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303037' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(13KB)], [147(13MB)]
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378322150, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 13878822, "oldest_snapshot_seqno": -1}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 12018 keys, 10006662 bytes, temperature: kUnknown
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378375568, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 10006662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9942184, "index_size": 33325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 327860, "raw_average_key_size": 27, "raw_value_size": 9738722, "raw_average_value_size": 810, "num_data_blocks": 1228, "num_entries": 12018, "num_filter_entries": 12018, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.375912) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10006662 bytes
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.377428) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.9 rd, 186.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(1725.0) write-amplify(722.7) OK, records in: 12522, records dropped: 504 output_compression: NoCompression
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.377444) EVENT_LOG_v1 {"time_micros": 1769093378377436, "job": 94, "event": "compaction_finished", "compaction_time_micros": 53601, "compaction_time_cpu_micros": 25979, "output_level": 6, "num_output_files": 1, "total_output_size": 10006662, "num_input_records": 12522, "num_output_records": 12018, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378377881, "job": 94, "event": "table_file_deletion", "file_number": 149}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378380696, "job": 94, "event": "table_file_deletion", "file_number": 147}
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.322046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:49:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
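The rocksdb burst above is the monitor compacting its store (/var/lib/ceph/mon/ceph-compute-2/store.db): a memtable flush to L0 (jobs 91 and 93), a manual L0-to-L6 compaction (jobs 92 and 94), then WAL and obsolete SST deletion. Each EVENT_LOG_v1 line carries a JSON payload after a fixed marker, so the events can be summarized mechanically; a minimal sketch, reading journal text from stdin:

    import json
    import sys

    MARK = "EVENT_LOG_v1 "

    for line in sys.stdin:
        i = line.find(MARK)
        if i < 0:
            continue
        # Everything after the marker is a JSON object, as emitted above.
        ev = json.loads(line[i + len(MARK):])
        if ev.get("event") in ("flush_finished", "compaction_finished"):
            print(ev["job"], ev["event"], ev.get("total_output_size", "-"))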
Jan 22 14:49:38 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:38 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:38.578+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:39.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:39.535+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:39 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:39 compute-2 ceph-mon[77081]: pgmap v2458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:40.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:40 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:41.553+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:41 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:41 compute-2 ceph-mon[77081]: pgmap v2459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:42.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:42.548+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:42 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:42 compute-2 ceph-mon[77081]: pgmap v2460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:43 compute-2 podman[262159]: 2026-01-22 14:49:43.123599407 +0000 UTC m=+0.174180274 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
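podman logs one such event per healthcheck run; the useful fields are the container name, health_status and health_failing_streak buried in the key=value list. A small extraction sketch (regexes are illustrative, matched to the line above):

    import re

    def podman_health(line):
        name = re.search(r"\bname=([^,)]+)", line).group(1)
        status = re.search(r"\bhealth_status=(\w+)", line).group(1)
        streak = int(re.search(r"\bhealth_failing_streak=(\d+)", line).group(1))
        return name, status, streak

    # e.g. -> ('ovn_controller', 'healthy', 0)

On the host, podman healthcheck run ovn_controller exercises the same test and exits nonzero when it fails.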
Jan 22 14:49:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:43.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:43.580+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:43 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:43 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:44.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:44.620+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:44 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:44 compute-2 ceph-mon[77081]: pgmap v2461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:45.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:45.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:45 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:46.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:46.629+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:46 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:46 compute-2 ceph-mon[77081]: pgmap v2462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:49:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:49:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:49:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:49:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:49:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:49:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:47.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:47.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:47 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:48.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:48.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:48 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:48 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:48 compute-2 ceph-mon[77081]: pgmap v2463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:49.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:49.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:49 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:50.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:50.723+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:50 compute-2 sudo[262189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:50 compute-2 sudo[262189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:50 compute-2 sudo[262189]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:50 compute-2 ceph-mon[77081]: pgmap v2464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:50 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:50 compute-2 sudo[262215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:49:50 compute-2 sudo[262215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:49:50 compute-2 sudo[262215]: pam_unix(sudo:session): session closed for user root
Jan 22 14:49:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:51.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:51.687+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:51 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:52.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:52.688+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:52 compute-2 ceph-mon[77081]: pgmap v2465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:52 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:53.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:53.658+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:53 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:53 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:54.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:54.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:54 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:54 compute-2 ceph-mon[77081]: pgmap v2466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:55.646+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:55 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:49:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:56.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:49:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:56.646+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:56 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:56 compute-2 ceph-mon[77081]: pgmap v2467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:57.654+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:58 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:49:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:58.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:49:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:58.663+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:59 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:49:59 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:49:59 compute-2 ceph-mon[77081]: pgmap v2468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:49:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:49:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:49:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:49:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:59.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:49:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:59.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:49:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:00 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 14:50:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 14:50:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:00.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:00.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:01 compute-2 podman[262245]: 2026-01-22 14:50:01.025891015 +0000 UTC m=+0.072323198 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 14:50:01 compute-2 ceph-mon[77081]: pgmap v2469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:01 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:01.652+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:02 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:02.684+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:03 compute-2 ceph-mon[77081]: pgmap v2470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:03 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:50:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 14:50:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:03.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 14:50:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:03.723+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:04 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:04 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:04.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:04 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:04.670 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:50:04 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:04.672 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:50:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:04.727+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:05 compute-2 ceph-mon[77081]: pgmap v2471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:05 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:05.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:05.722+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:06 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:06.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:06.714+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:07 compute-2 ceph-mon[77081]: pgmap v2472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:07 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:07.709+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:08.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:08 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:08 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:08.754+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:09 compute-2 ceph-mon[77081]: pgmap v2473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:09 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:09.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:09.674 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:50:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:09.729+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:10.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:10 compute-2 sudo[262268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:10 compute-2 sudo[262268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-2 sudo[262268]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:10 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:10 compute-2 sudo[262293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:50:10 compute-2 sudo[262293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-2 sudo[262293]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:10 compute-2 sudo[262318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:10 compute-2 sudo[262318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-2 sudo[262318]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:10 compute-2 sudo[262343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:50:10 compute-2 sudo[262343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:10.713+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:11 compute-2 sudo[262388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:11 compute-2 sudo[262388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:11 compute-2 sudo[262388]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:11 compute-2 sudo[262343]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:11 compute-2 sudo[262425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:11 compute-2 sudo[262425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:11 compute-2 sudo[262425]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:11 compute-2 ceph-mon[77081]: pgmap v2474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:11 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:50:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:50:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:50:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:50:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:50:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:50:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:11.716+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:12 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:12.725+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:13 compute-2 ceph-mon[77081]: pgmap v2475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:13 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:13 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:13.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:13.758+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:14 compute-2 podman[262451]: 2026-01-22 14:50:14.047252371 +0000 UTC m=+0.106508696 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:50:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:14.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:14 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:14.712+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:15 compute-2 ceph-mon[77081]: pgmap v2476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:15 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:15.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:15.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:16.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:16 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:16.666+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:17 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:17 compute-2 ceph-mon[77081]: pgmap v2477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:17.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:17.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:18.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:18 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:50:18 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:50:18 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:18 compute-2 sudo[262482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:18 compute-2 sudo[262482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:18 compute-2 sudo[262482]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:18.601+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:18 compute-2 sudo[262507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:50:18 compute-2 sudo[262507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:18 compute-2 sudo[262507]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/369554208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:50:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/369554208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:50:19 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:19 compute-2 ceph-mon[77081]: pgmap v2478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:19.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:19.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:20.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:20.580+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:20 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:20 compute-2 ceph-mon[77081]: pgmap v2479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:21.623+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:21 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:22.640+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:22 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:22 compute-2 ceph-mon[77081]: pgmap v2480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:23.595+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:23 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:23 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:24.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:24.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:24 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:24 compute-2 ceph-mon[77081]: pgmap v2481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:25.599+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:25 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:26.577+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:26 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:26 compute-2 ceph-mon[77081]: pgmap v2482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:27.537+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:27.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:27 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:28.548+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:28 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:28 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:28 compute-2 ceph-mon[77081]: pgmap v2483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:29.526+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:50:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:29.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:50:29 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:30.527+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:30 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:30 compute-2 ceph-mon[77081]: pgmap v2484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:31 compute-2 sudo[262539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:31 compute-2 sudo[262539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:31 compute-2 sudo[262539]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:31 compute-2 podman[262563]: 2026-01-22 14:50:31.382214402 +0000 UTC m=+0.059380619 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 22 14:50:31 compute-2 sudo[262570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:31 compute-2 sudo[262570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:31 compute-2 sudo[262570]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:31.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:31.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:31 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:32.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:33 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:33 compute-2 ceph-mon[77081]: pgmap v2485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:33.548+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:33.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:34 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:34 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:34.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:34.527+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:35 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:35 compute-2 ceph-mon[77081]: pgmap v2486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:50:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:35.521+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:36 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:36.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:36.477+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:37 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:37 compute-2 ceph-mon[77081]: pgmap v2487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:50:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:37.454+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:37.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:38 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:38.430+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:39 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:39 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:39 compute-2 ceph-mon[77081]: pgmap v2488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 22 14:50:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:39.417+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:39 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 22 14:50:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:40.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 22 14:50:40 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:40.393+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:40 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:41 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:41 compute-2 ceph-mon[77081]: pgmap v2489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 710 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 22 14:50:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:41.409+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:41 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:42 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:42.377+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:42 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:43 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:43 compute-2 ceph-mon[77081]: pgmap v2490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 710 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 22 14:50:43 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:43.346+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:43 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:44 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:44.394+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:44 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:44 compute-2 podman[262617]: 2026-01-22 14:50:44.535481087 +0000 UTC m=+0.106437054 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:50:45 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:45 compute-2 ceph-mon[77081]: pgmap v2491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 39 op/s
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.330165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445330193, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1157, "num_deletes": 251, "total_data_size": 1916708, "memory_usage": 1952800, "flush_reason": "Manual Compaction"}
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445339657, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 1258029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74143, "largest_seqno": 75295, "table_properties": {"data_size": 1253303, "index_size": 2121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12396, "raw_average_key_size": 20, "raw_value_size": 1242935, "raw_average_value_size": 2068, "num_data_blocks": 92, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 9533 microseconds, and 3729 cpu microseconds.
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.339697) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 1258029 bytes OK
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.339716) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341293) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341303) EVENT_LOG_v1 {"time_micros": 1769093445341300, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341332) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1910974, prev total WAL file size 1910974, number of live WAL files 2.
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341945) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(1228KB)], [150(9772KB)]
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445341973, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 11264691, "oldest_snapshot_seqno": -1}
Jan 22 14:50:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:45.364+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:45 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 12104 keys, 9648597 bytes, temperature: kUnknown
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445394475, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 9648597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584029, "index_size": 33223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 330794, "raw_average_key_size": 27, "raw_value_size": 9379455, "raw_average_value_size": 774, "num_data_blocks": 1219, "num_entries": 12104, "num_filter_entries": 12104, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.394661) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9648597 bytes
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.395732) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.3 rd, 183.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(16.6) write-amplify(7.7) OK, records in: 12619, records dropped: 515 output_compression: NoCompression
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.395746) EVENT_LOG_v1 {"time_micros": 1769093445395739, "job": 96, "event": "compaction_finished", "compaction_time_micros": 52559, "compaction_time_cpu_micros": 24685, "output_level": 6, "num_output_files": 1, "total_output_size": 9648597, "num_input_records": 12619, "num_output_records": 12104, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445395999, "job": 96, "event": "table_file_deletion", "file_number": 152}
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445397531, "job": 96, "event": "table_file_deletion", "file_number": 150}
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:50:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:45.449 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:50:45 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:45.449 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:50:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:45.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:46 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:46.359+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:46 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:50:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:50:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:50:47 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:47 compute-2 ceph-mon[77081]: pgmap v2492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 22 14:50:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:47.406+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:47 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:47.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:48 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:48 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:48.449+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:48 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:49 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:49 compute-2 ceph-mon[77081]: pgmap v2493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 22 14:50:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:49.474+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:49 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:49.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:50 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:50.486+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:50 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:51 compute-2 sudo[262649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:51 compute-2 sudo[262649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:51 compute-2 sudo[262649]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:51.536+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:51 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:51 compute-2 sudo[262674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:50:51 compute-2 sudo[262674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:50:51 compute-2 sudo[262674]: pam_unix(sudo:session): session closed for user root
Jan 22 14:50:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:51.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:51 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:51 compute-2 ceph-mon[77081]: pgmap v2494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 22 14:50:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:52.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:52.516+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:52 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:52 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:53.509+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:53 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:53.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:53 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:53 compute-2 ceph-mon[77081]: pgmap v2495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 282 KiB/s wr, 17 op/s
Jan 22 14:50:53 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4442 sec, osd.2 has slow ops (SLOW_OPS)
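
The health-check line above is the key datum in this stretch of the journal: subtracting the "blocked for" figure from the message timestamp dates the oldest stuck op. A minimal Python sketch of that arithmetic, using the values from the line above (the year is taken from the OSD's own ISO timestamps):

from datetime import datetime, timedelta

# "Jan 22 14:50:53 ... oldest one blocked for 4442 sec"
update = datetime(2026, 1, 22, 14, 50, 53)   # timestamp of the health check update
print(update - timedelta(seconds=4442))      # 2026-01-22 13:36:51 -> stuck since ~13:36:51 UTC

Successive updates (4442, 4448, 4453, ... sec) grow roughly in step with wall time, so this is one op pending since ~13:36 rather than fresh slowness accumulating.
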
Jan 22 14:50:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:54.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:54.492+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:54 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:54 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:55.448+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:55 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:50:55.451 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:50:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:55.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:55 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:55 compute-2 ceph-mon[77081]: pgmap v2496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 282 KiB/s wr, 17 op/s
Jan 22 14:50:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:56.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:56.489+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:56 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:56 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:56 compute-2 ceph-mon[77081]: pgmap v2497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 43 KiB/s wr, 3 op/s
Jan 22 14:50:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:57.443+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:50:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:50:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:57.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:50:57 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:50:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:58.484+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:58 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:50:58 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:50:58 compute-2 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:50:58 compute-2 ceph-mon[77081]: pgmap v2498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
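
The once-per-second get_health_metrics lines carry the live slow-op count, which drops from 62 to 32 at 14:50:57 in this window. The trend can be pulled straight out of a journal export with a regex keyed to the plain ceph-osd stream (each event also appears in the container-prefixed stream, so matching only one avoids double counting); the input filename here is illustrative:

import re

# e.g. "Jan 22 14:50:57 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, ..."
pat = re.compile(r'^(\w+ +\d+ [\d:]+) \S+ ceph-osd\[\d+\]: osd\.(\d+) \d+ '
                 r'get_health_metrics reporting (\d+) slow ops')

with open('compute-2-journal.txt') as f:     # hypothetical journalctl export
    for line in f:
        m = pat.match(line)
        if m:
            ts, osd, count = m.groups()
            print(ts, f'osd.{osd}', f'slow_ops={count}')
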
Jan 22 14:50:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:50:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:59.531+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:59 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:50:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:50:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:50:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:50:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:59.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:50:59 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:00.549+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:00 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:00 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:00 compute-2 ceph-mon[77081]: pgmap v2499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:01.510+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:01 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:01 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:02 compute-2 podman[262704]: 2026-01-22 14:51:02.027875536 +0000 UTC m=+0.088988671 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
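
The podman line above is a periodic container health-check event; buried in the label dump, the operationally interesting fields are name= and health_status= (here: healthy, failing streak 0). A sketch that reduces such events to name/status pairs, again against an illustrative journal export:

import re

# "... container health_status <id> (image=..., name=ovn_metadata_agent, health_status=healthy, ...)"
pat = re.compile(r'container health_status \w+ \(.*?name=([\w-]+),.*?health_status=(\w+)')

with open('compute-2-journal.txt') as f:
    for line in f:
        m = pat.search(line)
        if m:
            print(*m.groups())               # e.g. "ovn_metadata_agent healthy"
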
Jan 22 14:51:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:02.552+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:02 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:02 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:02 compute-2 ceph-mon[77081]: pgmap v2500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:03.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:03 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:03 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:03 compute-2 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:04.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:04.569+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:04 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:04 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:04 compute-2 ceph-mon[77081]: pgmap v2501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:05.545+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:05 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:05 compute-2 sshd-session[262725]: Invalid user ubuntu from 45.148.10.240 port 47572
Jan 22 14:51:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:05 compute-2 sshd-session[262725]: Connection closed by invalid user ubuntu 45.148.10.240 port 47572 [preauth]
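
The two sshd-session lines are a password-guessing probe: an invalid user "ubuntu" from 45.148.10.240 that disconnected before authenticating. Counting such attempts per source IP over the whole journal separates background scanning from a sustained attack; a minimal sketch (filename illustrative):

import re
from collections import Counter

pat = re.compile(r'sshd[^:]*: Invalid user (\S+) from (\S+) port \d+')
hits = Counter()

with open('compute-2-journal.txt') as f:
    for line in f:
        m = pat.search(line)
        if m:
            hits[m.group(2)] += 1            # count attempts by source IP

for src, n in hits.most_common(10):
    print(src, n)
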
Jan 22 14:51:05 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:05 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
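
The radosgw "beast" access lines throughout this window are anonymous HEAD / probes alternating between 192.168.122.100 and 192.168.122.102, each answered 200 with sub-millisecond latency, consistent with load-balancer health checks rather than real S3 traffic. A sketch that extracts source, timestamp, request, status, and latency from each access line (filename illustrative):

import re

# 'beast: 0x...: 192.168.122.100 - anonymous [22/Jan/2026:14:50:51.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s'
pat = re.compile(r'beast: \S+: (\S+) - \S+ \[([^\]]+)\] "([^"]+)" (\d+) \d+ .* latency=([\d.]+)s')

with open('compute-2-journal.txt') as f:
    for line in f:
        m = pat.search(line)
        if m:
            src, ts, request, status, latency = m.groups()
            print(src, ts, request, status, latency)
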
Jan 22 14:51:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:06.503+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:06 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:06 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:06 compute-2 ceph-mon[77081]: pgmap v2502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:07.454+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:07 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:51:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:07.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:51:07 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:07 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 14:51:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:08.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:08 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:08.422+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:08 compute-2 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:08 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:08 compute-2 ceph-mon[77081]: pgmap v2503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:09 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:09.397+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:09.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:09 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:10.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:10 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:10.394+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:10 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:10 compute-2 ceph-mon[77081]: pgmap v2504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:11 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:11.345+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:11 compute-2 sudo[262730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:11 compute-2 sudo[262730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:11 compute-2 sudo[262730]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:11 compute-2 sudo[262755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:11 compute-2 sudo[262755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:11 compute-2 sudo[262755]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:12 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:12 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:12.318+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:13 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:13 compute-2 ceph-mon[77081]: pgmap v2505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:13 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:13.311+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:14 compute-2 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:14 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:14 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:14.262+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:15 compute-2 podman[262782]: 2026-01-22 14:51:15.102987478 +0000 UTC m=+0.147197879 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:51:15 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:15 compute-2 ceph-mon[77081]: pgmap v2506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:15 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:15.276+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:16 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:16 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:16.281+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:17 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:17 compute-2 ceph-mon[77081]: pgmap v2507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:17 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:17.298+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:51:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:17.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:51:18 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:18 compute-2 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:18 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:18.255+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:51:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:51:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
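
The audit entries above show client.openstack polling capacity: a cluster-wide df followed by an osd pool get-quota on the volumes pool, the periodic free-space check an OpenStack block-storage service runs against its Ceph backend. The CLI equivalents of those two mon_commands can be reproduced from any host holding a suitable keyring (a sketch, not part of the log):

import json
import subprocess

# CLI forms of the audited mon_commands {"prefix":"df"} and {"prefix":"osd pool get-quota","pool":"volumes"}
df = json.loads(subprocess.check_output(['ceph', 'df', '--format', 'json']))
quota = json.loads(subprocess.check_output(['ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format', 'json']))

print(df['stats'])                           # cluster-wide totals (bytes)
print(quota)                                 # quota settings for the "volumes" pool
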
Jan 22 14:51:18 compute-2 sudo[262810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:18 compute-2 sudo[262810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:18 compute-2 sudo[262810]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:18 compute-2 sudo[262835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:51:18 compute-2 sudo[262835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:18 compute-2 sudo[262835]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:18 compute-2 sudo[262861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:18 compute-2 sudo[262861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:18 compute-2 sudo[262861]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:18 compute-2 sudo[262886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:51:18 compute-2 sudo[262886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:19 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:19 compute-2 ceph-mon[77081]: pgmap v2508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:19 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:19.278+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:19 compute-2 sudo[262886]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:20.244+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:20 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:20 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:51:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:51:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:51:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:51:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:51:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:51:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:21.238+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:21 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:21 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:21 compute-2 ceph-mon[77081]: pgmap v2509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:21.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:22.275+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:22 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:22 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:23 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:23 compute-2 ceph-mon[77081]: pgmap v2510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:23 compute-2 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:23.308+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:23 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:23.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:51:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:51:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:24.315+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:24 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:24 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
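
The recurring mon _set_new_cache_sizes lines appear to be the monitor periodically re-splitting its cache budget between the RocksDB kv cache and its other caches; the raw byte figures read more naturally in MiB. Converting the numbers from the line above:

MiB = 2 ** 20
for name, b in [('cache_size', 1020054731), ('inc_alloc', 348127232),
                ('full_alloc', 348127232), ('kv_alloc', 318767104)]:
    print(f'{name}: {b / MiB:.0f} MiB')      # ~973, 332, 332, 304 MiB
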
Jan 22 14:51:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:25.275+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:25 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:25 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:25 compute-2 ceph-mon[77081]: pgmap v2511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:25.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:26.296+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:26 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:26 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:26 compute-2 sudo[262944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:26 compute-2 sudo[262944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:26 compute-2 sudo[262944]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:26 compute-2 sudo[262970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:51:26 compute-2 sudo[262970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:26 compute-2 sudo[262970]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:27 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:51:27.279 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:51:27 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:51:27.280 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:51:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:27.301+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:27 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:27.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:27 compute-2 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 14:51:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:51:27 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:51:27 compute-2 ceph-mon[77081]: pgmap v2512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:28.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:28.298+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:28 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:28 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:28 compute-2 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:28 compute-2 ceph-mon[77081]: pgmap v2513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:29.285+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:29 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:51:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:29.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:51:29 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:30 compute-2 ovn_controller[133156]: 2026-01-22T14:51:30Z|00078|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 14:51:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:30.268+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:30 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:30 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:30 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:30 compute-2 ceph-mon[77081]: pgmap v2514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:31.294+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:31 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:51:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:31.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:51:31 compute-2 sudo[262997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:31 compute-2 sudo[262997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:31 compute-2 sudo[262997]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:31 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 14:51:31 compute-2 sudo[263022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:31 compute-2 sudo[263022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:31 compute-2 sudo[263022]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:32.279+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:32 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:51:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:32 compute-2 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:51:32 compute-2 ceph-mon[77081]: pgmap v2515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:33 compute-2 podman[263048]: 2026-01-22 14:51:33.001330877 +0000 UTC m=+0.055580893 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 22 14:51:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:33.240+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:33 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:51:33 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:51:33 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:33.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:33 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 4482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:33 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:33 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:34.217+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:34 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:34 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:51:34.281 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:51:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:34 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:34 compute-2 ceph-mon[77081]: pgmap v2516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 2 op/s
Jan 22 14:51:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:51:35 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:51:35 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:35.250+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:35 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:35.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:35 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:35 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:36.240+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:36 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:51:36 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:51:36 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:36 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:51:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:51:36 compute-2 ceph-mon[77081]: pgmap v2517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 726 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 22 14:51:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:37.252+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:37 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:38 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:38.202+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:38 compute-2 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 e162: 3 total, 3 up, 3 in
Jan 22 14:51:39 compute-2 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:39 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:39 compute-2 ceph-mon[77081]: pgmap v2518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 714 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 42 op/s
Jan 22 14:51:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:39.177+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:39 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:39.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:40 compute-2 ceph-mon[77081]: osdmap e162: 3 total, 3 up, 3 in
Jan 22 14:51:40 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:40.135+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:40 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:40.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:41.091+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:41 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:41 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:41 compute-2 ceph-mon[77081]: pgmap v2520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Jan 22 14:51:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:42.053+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:42 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:42 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:51:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:42.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:51:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:43.004+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:43 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:43 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:43 compute-2 ceph-mon[77081]: pgmap v2521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Jan 22 14:51:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:51:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:51:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:44.020+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:44 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:44 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:44 compute-2 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:51:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:51:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:45.007+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:45 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:45 compute-2 ceph-mon[77081]: pgmap v2522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.6 KiB/s wr, 65 op/s
Jan 22 14:51:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:51:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:45.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:51:46 compute-2 podman[263072]: 2026-01-22 14:51:46.024167431 +0000 UTC m=+0.084627646 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 14:51:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:46.042+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:46 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:51:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:51:46 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:47.069+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:47 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:51:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:51:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:51:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:51:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:51:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:51:47 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:47.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:48.053+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:48 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:48 compute-2 ceph-mon[77081]: pgmap v2523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Jan 22 14:51:48 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:48 compute-2 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:49.068+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:49 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:49 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:49.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:50.099+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:50 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:50 compute-2 ceph-mon[77081]: pgmap v2524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 409 B/s wr, 18 op/s
Jan 22 14:51:50 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:51.147+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:51 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:51:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:51.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:51:52 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:52 compute-2 ceph-mon[77081]: pgmap v2525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 351 B/s wr, 15 op/s
Jan 22 14:51:52 compute-2 sudo[263102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:52 compute-2 sudo[263102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:52 compute-2 sudo[263102]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:52.158+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:52 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:52 compute-2 sudo[263127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:51:52 compute-2 sudo[263127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:51:52 compute-2 sudo[263127]: pam_unix(sudo:session): session closed for user root
Jan 22 14:51:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:52.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:53 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:53 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:53.110+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:53 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:53.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:54.064+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:54 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:54 compute-2 ceph-mon[77081]: pgmap v2526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 14:51:54 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:54 compute-2 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:51:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:51:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:55.027+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:55 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:55 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:51:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:56.055+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:56 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:51:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:51:56 compute-2 ceph-mon[77081]: pgmap v2527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 14:51:56 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:57.026+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:57 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:51:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:51:57 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:58.075+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:58 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:51:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:51:59 compute-2 ceph-mon[77081]: pgmap v2528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:51:59 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:59 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:59 compute-2 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:51:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:59.111+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:59 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:51:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:51:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:51:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:51:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:51:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:00.095+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:00 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:00.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:00 compute-2 ceph-mon[77081]: pgmap v2529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:00 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:01.071+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:01 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:01.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:02.023+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:02 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:02 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:02 compute-2 ceph-mon[77081]: pgmap v2530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:03.059+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:03 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:03 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:03 compute-2 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:03.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:03 compute-2 podman[263158]: 2026-01-22 14:52:03.992349556 +0000 UTC m=+0.048792294 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 14:52:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:04.106+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:04 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:04.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:04 compute-2 ceph-mon[77081]: pgmap v2531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:04 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:04 compute-2 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:05.082+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:05 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:05.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:06.045+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:06 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:06 compute-2 ceph-mon[77081]: pgmap v2532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:06.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:06.996+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:07 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:07 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:52:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:07.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:52:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:07.972+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:07 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:52:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:52:08 compute-2 ceph-mon[77081]: pgmap v2533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:08 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:08 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:08.990+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:09 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:09.984+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:09 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:10.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:10 compute-2 ceph-mon[77081]: pgmap v2534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:10 compute-2 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 14:52:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:10.958+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:10 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:11 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:11.062 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:52:11 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:11.063 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:52:11 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:52:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:52:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:12.001+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:12 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:12 compute-2 sudo[263182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:12 compute-2 sudo[263182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:12 compute-2 sudo[263182]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:12 compute-2 sudo[263207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:12 compute-2 sudo[263207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:12 compute-2 sudo[263207]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:12.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:12 compute-2 ceph-mon[77081]: pgmap v2535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:12 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:12 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:13.016+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:13 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:13 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:13.065 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:52:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:52:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:52:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:14.047+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:14 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:52:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:52:14 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:14 compute-2 ceph-mon[77081]: pgmap v2536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:14 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:15.017+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:15 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:15 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.006000190s ======
Jan 22 14:52:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:15.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000190s
Jan 22 14:52:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:16.003+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:16 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:16.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:16 compute-2 ceph-mon[77081]: pgmap v2537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:16 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:17.020+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:17 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:17 compute-2 podman[263235]: 2026-01-22 14:52:17.055356838 +0000 UTC m=+0.110088550 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:52:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:17.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:18.000+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:18 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:18 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:18 compute-2 ceph-mon[77081]: pgmap v2538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:18 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:18.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:52:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:52:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:52:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:52:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:18.964+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:18 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:19 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:19 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:52:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:52:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:52:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:52:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:19.968+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:19 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:20.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:20.992+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:20 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:21 compute-2 ceph-mon[77081]: pgmap v2539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:21 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:21 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:21 compute-2 ceph-mon[77081]: pgmap v2540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:21 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:21.956+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:21 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:22.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:22.984+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:22 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:23 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:23.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:23.965+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:23 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:24.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:24 compute-2 ceph-mon[77081]: pgmap v2541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:24 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:24 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:25.006+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:25 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:25.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:26.056+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:26 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:26 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:52:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:52:26 compute-2 sudo[263267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:26 compute-2 sudo[263267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:26 compute-2 sudo[263267]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:27.024+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:27 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:27 compute-2 sudo[263292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:52:27 compute-2 sudo[263292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:27 compute-2 sudo[263292]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-2 sudo[263317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:27 compute-2 sudo[263317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:27 compute-2 sudo[263317]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-2 sudo[263342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:52:27 compute-2 sudo[263342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:27 compute-2 ceph-mon[77081]: pgmap v2542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:27 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:27 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 14:52:27 compute-2 sudo[263342]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:28.035+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:28 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:52:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:52:28 compute-2 ceph-mon[77081]: pgmap v2543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:28 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 14:52:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:52:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:52:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:52:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:52:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:52:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:52:28 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:29.012+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:29 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:29 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:30.058+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:30 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:52:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:30.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:52:30 compute-2 ceph-mon[77081]: pgmap v2544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:30 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:31.024+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:31 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:31.978+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:31 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:32 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:32 compute-2 ceph-mon[77081]: pgmap v2545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:32 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:32.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:32 compute-2 sudo[263400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:32 compute-2 sudo[263400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:32 compute-2 sudo[263400]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:32 compute-2 sudo[263425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:32 compute-2 sudo[263425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:32 compute-2 sudo[263425]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:33.019+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:33 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:33 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:34.060+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:34 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 14:52:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 14:52:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:34 compute-2 ceph-mon[77081]: pgmap v2546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:34 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:34 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:35 compute-2 podman[263452]: 2026-01-22 14:52:35.00381899 +0000 UTC m=+0.067467684 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 14:52:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:35.065+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:35 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:35 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:35.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:36.055+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:36 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:52:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:36.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:52:37 compute-2 ceph-mon[77081]: pgmap v2547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:37 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:37.066+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:37 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:38.051+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:38 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:38 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:38 compute-2 ceph-mon[77081]: pgmap v2548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:38 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:52:38 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:52:38 compute-2 sudo[263473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:38 compute-2 sudo[263473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:38 compute-2 sudo[263473]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:38 compute-2 sudo[263498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:52:38 compute-2 sudo[263498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:38 compute-2 sudo[263498]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:38.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.463348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558463369, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1704, "num_deletes": 250, "total_data_size": 3211790, "memory_usage": 3277864, "flush_reason": "Manual Compaction"}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558483894, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 2099552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75300, "largest_seqno": 76999, "table_properties": {"data_size": 2092912, "index_size": 3521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16257, "raw_average_key_size": 19, "raw_value_size": 2078219, "raw_average_value_size": 2546, "num_data_blocks": 154, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093446, "oldest_key_time": 1769093446, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 20609 microseconds, and 4515 cpu microseconds.
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483955) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 2099552 bytes OK
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483970) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.486047) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.486074) EVENT_LOG_v1 {"time_micros": 1769093558486066, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.486097) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3203764, prev total WAL file size 3203764, number of live WAL files 2.
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.487972) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(2050KB)], [153(9422KB)]
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558488050, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 11748149, "oldest_snapshot_seqno": -1}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 12403 keys, 10657121 bytes, temperature: kUnknown
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558562695, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 10657121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10590143, "index_size": 34865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 339520, "raw_average_key_size": 27, "raw_value_size": 10379451, "raw_average_value_size": 836, "num_data_blocks": 1270, "num_entries": 12403, "num_filter_entries": 12403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.562924) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10657121 bytes
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.564304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.3 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(10.7) write-amplify(5.1) OK, records in: 12920, records dropped: 517 output_compression: NoCompression
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.564333) EVENT_LOG_v1 {"time_micros": 1769093558564326, "job": 98, "event": "compaction_finished", "compaction_time_micros": 74701, "compaction_time_cpu_micros": 29292, "output_level": 6, "num_output_files": 1, "total_output_size": 10657121, "num_input_records": 12920, "num_output_records": 12403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558564746, "job": 98, "event": "table_file_deletion", "file_number": 155}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558566299, "job": 98, "event": "table_file_deletion", "file_number": 153}
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.487742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:52:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:39.091+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:39 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:39 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:39 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:40.076+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:40 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 14:52:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 14:52:41 compute-2 ceph-mon[77081]: pgmap v2549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:41 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:41.060+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:41 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:42.068+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:42 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:42 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:42 compute-2 ceph-mon[77081]: pgmap v2550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:42 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:42.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:43.080+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:43 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:43 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:44 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:44.104+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:44 compute-2 ceph-mon[77081]: pgmap v2551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:44 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:44 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:44.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:45 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:45.135+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:45.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:46 compute-2 ceph-mon[77081]: pgmap v2552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:46 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:46 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:46.168+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:47 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:47 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:47.175+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:52:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:52:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:52:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:48 compute-2 podman[263528]: 2026-01-22 14:52:48.047648 +0000 UTC m=+0.108141070 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 14:52:48 compute-2 ceph-mon[77081]: pgmap v2553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:48 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:48.217+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:48 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:48.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:49 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:49 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:49 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:49.177+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:52:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:52:50 compute-2 ceph-mon[77081]: pgmap v2554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:50 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:50 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:50.219+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:50.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:51 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:51.219+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:51 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:51.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:52 compute-2 ceph-mon[77081]: pgmap v2555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:52 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:52.267+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:52 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:52.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:52 compute-2 sudo[263556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:52 compute-2 sudo[263556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:52 compute-2 sudo[263556]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:52 compute-2 sudo[263581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:52:52 compute-2 sudo[263581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:52:52 compute-2 sudo[263581]: pam_unix(sudo:session): session closed for user root
Jan 22 14:52:53 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:53.317+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:53 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:53.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:53 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:53.974 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:52:53 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:53.974 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:52:54 compute-2 ceph-mon[77081]: pgmap v2556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:54 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:54 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:54.337+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:54 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:54.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:55 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:55.336+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:55 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:55.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:56 compute-2 ceph-mon[77081]: pgmap v2557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:56 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:56.324+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:56 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:52:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:52:57 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:52:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:57.358+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:57 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:52:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:58 compute-2 ceph-mon[77081]: pgmap v2558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:52:58 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:52:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:58.405+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:58 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:52:59 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:52:59 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:52:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:59.452+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:59 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:52:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:52:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:52:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:52:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:52:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:59.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:52:59 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:52:59.976 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:53:00 compute-2 ceph-mon[77081]: pgmap v2559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:00 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:00.434+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:00 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:01 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:01.399+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:01 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:02.384+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:02 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:02 compute-2 ceph-mon[77081]: pgmap v2560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:02 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:02.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:03 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:03 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:03.421+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:03.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:04 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:04.416+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:04 compute-2 ceph-mon[77081]: pgmap v2561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:04 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:04 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:05 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:05.419+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:05 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:05.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:05 compute-2 podman[263613]: 2026-01-22 14:53:05.9948085 +0000 UTC m=+0.056341326 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 14:53:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:06.374+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:06.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:06 compute-2 ceph-mon[77081]: pgmap v2562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:06 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:07 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:07.341+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:07 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:07.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:08 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:08.324+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:08 compute-2 ceph-mon[77081]: pgmap v2563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:08 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:08 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:09.275+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:09 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:09.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:09 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:09 compute-2 ceph-mon[77081]: pgmap v2564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:09 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:10.290+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:10 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:10.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:11 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:11.248+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:11 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:12 compute-2 ceph-mon[77081]: pgmap v2565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:12 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:12.292+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:12 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:12.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:12 compute-2 sudo[263635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:12 compute-2 sudo[263635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:12 compute-2 sudo[263635]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:12 compute-2 sudo[263660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:12 compute-2 sudo[263660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:12 compute-2 sudo[263660]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:13 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:13.267+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:13 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:14 compute-2 ceph-mon[77081]: pgmap v2566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:14 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:14 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:14.280+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:14 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:14.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:15.236+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:15 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:15 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:15.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:16.206+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:16 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:16 compute-2 ceph-mon[77081]: pgmap v2567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:16 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:17 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:17.249+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:17.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:18 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:18 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:18.260+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:53:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:53:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:53:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:53:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:18.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:19 compute-2 podman[263690]: 2026-01-22 14:53:19.035095995 +0000 UTC m=+0.092391712 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 14:53:19 compute-2 ceph-mon[77081]: pgmap v2568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:19 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:19 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:53:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:53:19 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:19 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:19.303+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:19.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:20 compute-2 ceph-mon[77081]: pgmap v2569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:20 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:20 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:20.272+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:20.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:21 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:21.232+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:21 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:21 compute-2 sshd-session[263717]: Invalid user ubuntu from 45.148.10.240 port 46762
Jan 22 14:53:21 compute-2 sshd-session[263717]: Connection closed by invalid user ubuntu 45.148.10.240 port 46762 [preauth]
Jan 22 14:53:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:22 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:22.273+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:22.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:22 compute-2 ceph-mon[77081]: pgmap v2570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:22 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:23 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:23.292+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:23 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:23 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:24 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:24.291+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:24.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:24 compute-2 ceph-mon[77081]: pgmap v2571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:24 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:25 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:25.302+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:25 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:25.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:26 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:26.302+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:26.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:26 compute-2 ceph-mon[77081]: pgmap v2572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:26 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:27 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:27.269+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:27 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:28 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:28.303+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:28 compute-2 ceph-mon[77081]: pgmap v2573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:28 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:28 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:29 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:29.305+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:29.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:29 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:29 compute-2 ceph-mon[77081]: pgmap v2574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:30 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:30.317+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:30.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:30 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:31 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:31.282+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:31 compute-2 ceph-mon[77081]: pgmap v2575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:31 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:32 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:32.267+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:32 compute-2 sudo[263725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:32 compute-2 sudo[263725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:32 compute-2 sudo[263725]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:33 compute-2 sudo[263750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:33 compute-2 sudo[263750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:33 compute-2 sudo[263750]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:33 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:33.236+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:33 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:53:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:33.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:53:34 compute-2 ceph-mon[77081]: pgmap v2576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:34 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:34 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:34.202+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:34 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:34.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:35 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:35.216+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:35 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:35.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:36.179+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:36 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:36 compute-2 ceph-mon[77081]: pgmap v2577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:36 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:53:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:36.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:53:36 compute-2 podman[263777]: 2026-01-22 14:53:36.992392654 +0000 UTC m=+0.049534275 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 14:53:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:37.142+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:37 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:37 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:37.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:38 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:38.129+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:38 compute-2 ceph-mon[77081]: pgmap v2578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:38 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.314801) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618314830, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1004, "num_deletes": 251, "total_data_size": 1621331, "memory_usage": 1651256, "flush_reason": "Manual Compaction"}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618323512, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 1064004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77004, "largest_seqno": 78003, "table_properties": {"data_size": 1059795, "index_size": 1732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10912, "raw_average_key_size": 20, "raw_value_size": 1050737, "raw_average_value_size": 1953, "num_data_blocks": 75, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093559, "oldest_key_time": 1769093559, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 8768 microseconds, and 4033 cpu microseconds.
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323565) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 1064004 bytes OK
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323583) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326528) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326577) EVENT_LOG_v1 {"time_micros": 1769093618326567, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326602) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1616248, prev total WAL file size 1616248, number of live WAL files 2.
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.327399) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(1039KB)], [156(10MB)]
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618327439, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 11721125, "oldest_snapshot_seqno": -1}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 12430 keys, 10129553 bytes, temperature: kUnknown
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618386578, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10129553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10062943, "index_size": 34433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 341154, "raw_average_key_size": 27, "raw_value_size": 9852158, "raw_average_value_size": 792, "num_data_blocks": 1246, "num_entries": 12430, "num_filter_entries": 12430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.386913) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10129553 bytes
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.389232) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.6 rd, 170.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(20.5) write-amplify(9.5) OK, records in: 12941, records dropped: 511 output_compression: NoCompression
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.389267) EVENT_LOG_v1 {"time_micros": 1769093618389241, "job": 100, "event": "compaction_finished", "compaction_time_micros": 59311, "compaction_time_cpu_micros": 29278, "output_level": 6, "num_output_files": 1, "total_output_size": 10129553, "num_input_records": 12941, "num_output_records": 12430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618389909, "job": 100, "event": "table_file_deletion", "file_number": 158}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618392044, "job": 100, "event": "table_file_deletion", "file_number": 156}
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.327297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:53:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
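[annotation] The compaction summary above reports its own derived ratios (197.6 rd / 170.8 wr MB/sec, read-write-amplify 20.5, write-amplify 9.5), and they reproduce exactly from the raw byte counts in the surrounding EVENT_LOG_v1 entries if both amplification figures are normalised by the flushed L0 input. A quick check, with the numbers copied from flush job 99 / compaction job 100:

```python
# Byte counts from the EVENT_LOG_v1 entries for jobs 99 and 100 above.
l0_in = 1_064_004        # table 158, the freshly flushed L0 file
total_in = 11_721_125    # "input_data_size": L0 file 158 + overlapping L6 file 156
out = 10_129_553         # table 159, the new L6 file
micros = 59_311          # "compaction_time_micros"

print(f"write-amplify      {out / l0_in:.1f}")               # 9.5
print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # 20.5
print(f"rd MB/sec          {total_in / micros:.1f}")         # 197.6 (bytes/us == MB/s)
print(f"wr MB/sec          {out / micros:.1f}")              # 170.8
```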
Jan 22 14:53:38 compute-2 sudo[263796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:38 compute-2 sudo[263796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-2 sudo[263796]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:38.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:38 compute-2 sudo[263821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:53:38 compute-2 sudo[263821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-2 sudo[263821]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:38 compute-2 sudo[263846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:38 compute-2 sudo[263846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-2 sudo[263846]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:38 compute-2 sudo[263871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:53:38 compute-2 sudo[263871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:38 compute-2 sudo[263871]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:39.124+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:39 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:39 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:39 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:53:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:53:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:53:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:53:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:53:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:53:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:39.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:40 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:40.160+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:40 compute-2 ceph-mon[77081]: pgmap v2579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:40 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:41 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:41.185+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:41 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:41.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:42 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:42.186+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:42.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:42 compute-2 ceph-mon[77081]: pgmap v2580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:42 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:43 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:43.221+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:53:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:53:44 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:44 compute-2 ceph-mon[77081]: pgmap v2581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:44 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:44 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:44 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:44.194+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:53:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:44.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:53:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:45 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:45.175+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:46 compute-2 ceph-mon[77081]: pgmap v2582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:46 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:46 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:46.212+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:53:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:46.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:53:46 compute-2 sudo[263932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:46 compute-2 sudo[263932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:46 compute-2 sudo[263932]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:46 compute-2 sudo[263957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:53:46 compute-2 sudo[263957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:46 compute-2 sudo[263957]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:47 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:47.228+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:53:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:53:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:53:47.230 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:53:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:53:47.230 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:53:47 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:53:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:53:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:47.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:48 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:48.257+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:53:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:48.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:53:48 compute-2 ceph-mon[77081]: pgmap v2583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:48 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:48 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:49 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:49.262+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:49 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:53:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:53:50 compute-2 podman[263983]: 2026-01-22 14:53:50.020490437 +0000 UTC m=+0.078157606 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 14:53:50 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:53:50.104 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:53:50 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:53:50.105 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:53:50 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:50.279+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:53:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:50.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:53:50 compute-2 ceph-mon[77081]: pgmap v2584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:50 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:51 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:51.316+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:51.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:52 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:52 compute-2 ceph-mon[77081]: pgmap v2585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:52 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:52 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:52.344+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:52.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:53 compute-2 sudo[264012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:53 compute-2 sudo[264012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:53 compute-2 sudo[264012]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:53 compute-2 sudo[264037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:53:53 compute-2 sudo[264037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:53:53 compute-2 sudo[264037]: pam_unix(sudo:session): session closed for user root
Jan 22 14:53:53 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:53.341+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:53 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:53.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:54 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:54.345+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:54.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:54 compute-2 ceph-mon[77081]: pgmap v2586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:54 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:54 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:55 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:55.357+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:55 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:53:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:55.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:53:56 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:56.351+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:53:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:53:56 compute-2 ceph-mon[77081]: pgmap v2587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:56 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:57 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:57.337+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:57 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:53:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:53:58 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:58.304+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:53:59 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:53:59.107 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:53:59 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:59.259+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:53:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:59 compute-2 ceph-mon[77081]: pgmap v2588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:53:59 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:53:59 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:53:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:53:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:53:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:53:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:00 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:00.236+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:00 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:00 compute-2 ceph-mon[77081]: pgmap v2589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:00 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:00.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:01 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:01.258+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:01 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:02 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:02.233+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:02.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:03 compute-2 ceph-mon[77081]: pgmap v2590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:03 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:03 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:03.225+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:03.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:04 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:04.210+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:04 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:04 compute-2 ceph-mon[77081]: pgmap v2591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:04 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:04 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:05 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:05.179+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:05 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:05.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:06.215+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:06 compute-2 ceph-mon[77081]: pgmap v2592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:06 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:07 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:07.178+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:07 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:07.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:08 compute-2 podman[264069]: 2026-01-22 14:54:08.025248231 +0000 UTC m=+0.076429806 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:54:08 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:08.159+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:09 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:09.165+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:09 compute-2 ceph-mon[77081]: pgmap v2593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:09 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:09 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:09.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:10 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:10.151+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:10 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:10 compute-2 ceph-mon[77081]: pgmap v2594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:10 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:10.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:11 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:11.159+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:11 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:11.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:12 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:12.162+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:12.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:12 compute-2 ceph-mon[77081]: pgmap v2595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:12 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:13 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:13.206+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:13 compute-2 sudo[264092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:13 compute-2 sudo[264092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:13 compute-2 sudo[264092]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:13 compute-2 sudo[264117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:13 compute-2 sudo[264117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:13 compute-2 sudo[264117]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.561392) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653561466, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 696, "num_deletes": 256, "total_data_size": 999630, "memory_usage": 1013048, "flush_reason": "Manual Compaction"}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653570694, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 656855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78008, "largest_seqno": 78699, "table_properties": {"data_size": 653574, "index_size": 1124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8446, "raw_average_key_size": 19, "raw_value_size": 646531, "raw_average_value_size": 1493, "num_data_blocks": 48, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093619, "oldest_key_time": 1769093619, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 9355 microseconds, and 4706 cpu microseconds.
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570749) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 656855 bytes OK
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570775) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572371) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572390) EVENT_LOG_v1 {"time_micros": 1769093653572384, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572415) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 995762, prev total WAL file size 995762, number of live WAL files 2.
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572998) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373732' seq:0, type:0; will stop at (end)
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(641KB)], [159(9892KB)]
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653573032, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 10786408, "oldest_snapshot_seqno": -1}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 12339 keys, 10642966 bytes, temperature: kUnknown
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653639752, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 10642966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10576177, "index_size": 34868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30853, "raw_key_size": 340455, "raw_average_key_size": 27, "raw_value_size": 10366147, "raw_average_value_size": 840, "num_data_blocks": 1261, "num_entries": 12339, "num_filter_entries": 12339, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.640205) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10642966 bytes
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.641674) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.0 rd, 158.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(32.6) write-amplify(16.2) OK, records in: 12863, records dropped: 524 output_compression: NoCompression
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.641695) EVENT_LOG_v1 {"time_micros": 1769093653641687, "job": 102, "event": "compaction_finished", "compaction_time_micros": 66998, "compaction_time_cpu_micros": 40459, "output_level": 6, "num_output_files": 1, "total_output_size": 10642966, "num_input_records": 12863, "num_output_records": 12339, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653642389, "job": 102, "event": "table_file_deletion", "file_number": 161}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653644750, "job": 102, "event": "table_file_deletion", "file_number": 159}
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:54:13 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:13 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:14 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:14.255+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:14 compute-2 ceph-mon[77081]: pgmap v2596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:14 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:15 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:15.252+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:15 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:15 compute-2 ceph-mon[77081]: pgmap v2597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:16 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:16.286+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:17 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:17.293+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:17 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:17 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:18 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:18.252+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:18 compute-2 ceph-mon[77081]: pgmap v2598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:18 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:19 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:19.259+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:19 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:54:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4136729720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:54:19 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:20 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:20.290+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:21 compute-2 podman[264146]: 2026-01-22 14:54:21.07586293 +0000 UTC m=+0.129382930 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 14:54:21 compute-2 ceph-mon[77081]: pgmap v2599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:21 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:21.290+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:21 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:21.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:22 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:22 compute-2 ceph-mon[77081]: pgmap v2600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:22 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:22.333+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:22 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:22.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:23.317+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:23 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:23 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:23.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:24.348+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:24 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:24.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:24 compute-2 ceph-mon[77081]: pgmap v2601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:24 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:24 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:25.377+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:25 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:25 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 14:54:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:25.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:26.416+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:26 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:27 compute-2 ceph-mon[77081]: pgmap v2602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:27 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:27.391+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:27 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:27.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:28 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:28 compute-2 ceph-mon[77081]: pgmap v2603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:28 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:28.423+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:28 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:29 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:29 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:29.382+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:29 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:54:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:29.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:54:30 compute-2 ceph-mon[77081]: pgmap v2604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:30 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:30.384+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:30 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:31 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:31.425+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:31 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:31.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:32 compute-2 ceph-mon[77081]: pgmap v2605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:32 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:32.446+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:32 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:33.418+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:33 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:33 compute-2 sudo[264178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:33 compute-2 sudo[264178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:33 compute-2 sudo[264178]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:33 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:33 compute-2 sudo[264203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:33 compute-2 sudo[264203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:33 compute-2 sudo[264203]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:34.395+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:34 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:34.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:34 compute-2 ceph-mon[77081]: pgmap v2606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:34 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:34 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:35.426+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:35 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:35 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:36.432+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:36 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:36.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:36 compute-2 ceph-mon[77081]: pgmap v2607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:54:36 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:37.477+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:37 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:37 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:37.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:38.441+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:38 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:38.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:38 compute-2 ceph-mon[77081]: pgmap v2608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 14:54:38 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:38 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:39 compute-2 podman[264231]: 2026-01-22 14:54:38.999199637 +0000 UTC m=+0.054045116 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:54:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:39.418+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:39 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:39 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:39.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:40.395+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:40 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:40 compute-2 ceph-mon[77081]: pgmap v2609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 22 14:54:40 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:41.415+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:41 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:41.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:42 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 14:54:42 compute-2 ceph-mon[77081]: pgmap v2610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 18 op/s
Jan 22 14:54:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:42.392+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:42 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:43 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:43 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:43.428+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:43 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:43.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:44 compute-2 ceph-mon[77081]: pgmap v2611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 680 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 18 op/s
Jan 22 14:54:44 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:44 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:44.408+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:44 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:44.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:45.389+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:45 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:45.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:54:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:46.416+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:46 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:46 compute-2 ceph-mon[77081]: pgmap v2612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 691 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 394 KiB/s wr, 22 op/s
Jan 22 14:54:46 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:46.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:47 compute-2 sudo[264255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:47 compute-2 sudo[264255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-2 sudo[264255]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-2 sudo[264280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:54:47 compute-2 sudo[264280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-2 sudo[264280]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:54:47.231 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:54:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:54:47.232 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:54:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:54:47.232 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:54:47 compute-2 sudo[264305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:47 compute-2 sudo[264305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-2 sudo[264305]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-2 sudo[264330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:54:47 compute-2 sudo[264330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:47.439+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:47 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:47 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:47 compute-2 sudo[264330]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:47.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:48.399+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:48 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:48.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:48 compute-2 ceph-mon[77081]: pgmap v2613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 22 14:54:48 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:48 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:49.439+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:49 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:49 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:49 compute-2 ceph-mon[77081]: pgmap v2614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 701 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Jan 22 14:54:49 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:49 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 22 14:54:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:49.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 22 14:54:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:50.448+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:50 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:50.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:50 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:54:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:54:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:54:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:54:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:54:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:51.473+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:51 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:52 compute-2 podman[264389]: 2026-01-22 14:54:52.068434737 +0000 UTC m=+0.131493420 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 22 14:54:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:52.516+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:52 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:52.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:52 compute-2 ceph-mon[77081]: pgmap v2615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Jan 22 14:54:52 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:53.481+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:53 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:53 compute-2 sudo[264417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:53 compute-2 sudo[264417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:53 compute-2 sudo[264417]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:53 compute-2 sudo[264442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:53 compute-2 sudo[264442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:53 compute-2 sudo[264442]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:54 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:54 compute-2 ceph-mon[77081]: pgmap v2616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 MiB/s wr, 18 op/s
Jan 22 14:54:54 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:54 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:54.477+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:54 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:54.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:55 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:55.485+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:55 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:56 compute-2 ceph-mon[77081]: pgmap v2617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 MiB/s wr, 18 op/s
Jan 22 14:54:56 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:56.518+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:56 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:54:57.186 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:54:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:54:57.188 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:54:57 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:57.475+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:57 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:57 compute-2 sudo[264469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:54:57 compute-2 sudo[264469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:57 compute-2 sudo[264469]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:57 compute-2 sudo[264494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:54:57 compute-2 sudo[264494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:54:57 compute-2 sudo[264494]: pam_unix(sudo:session): session closed for user root
Jan 22 14:54:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:57.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:58.429+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:58 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:54:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:54:58 compute-2 ceph-mon[77081]: pgmap v2618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 22 14:54:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:54:58 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:59 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:54:59.191 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:54:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:59.474+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:54:59 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:54:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:59 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:54:59 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:54:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:54:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:54:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:59.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:00.462+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:00 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:00.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:01 compute-2 ceph-mon[77081]: pgmap v2619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:01 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:01 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:01.421+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:01 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:01.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:02 compute-2 ceph-mon[77081]: pgmap v2620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:02 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:02.455+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:02 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:02.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:03.489+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:03 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:03 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:55:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:55:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:04.505+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:04 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:04.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:04 compute-2 ceph-mon[77081]: pgmap v2621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:04 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:04 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:05.555+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:05 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:05 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:05.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:55:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:55:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:06.584+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:06 compute-2 ceph-mon[77081]: pgmap v2622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:06 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:07.541+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:07 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:07 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:08.516+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:08 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:55:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:55:08 compute-2 ceph-mon[77081]: pgmap v2623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:08 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:08 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:09.470+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:09 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:09 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:10 compute-2 podman[264525]: 2026-01-22 14:55:10.007735008 +0000 UTC m=+0.062761985 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 14:55:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:10.485+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:10 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:10.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:10 compute-2 ceph-mon[77081]: pgmap v2624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:10 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:11.509+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:11 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:11 compute-2 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 14:55:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:12.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:12.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:12.555+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:12 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:12 compute-2 ceph-mon[77081]: pgmap v2625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:12 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:13.509+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:13 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:13 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:13 compute-2 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:13 compute-2 sudo[264547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:13 compute-2 sudo[264547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:13 compute-2 sudo[264547]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:13 compute-2 sudo[264572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:14 compute-2 sudo[264572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:14 compute-2 sudo[264572]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:14.473+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:14 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:14.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:14 compute-2 ceph-mon[77081]: pgmap v2626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:14 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:15.503+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:15 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:15 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:16.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:16.489+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:16 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:16 compute-2 ceph-mon[77081]: pgmap v2627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:16 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:17.471+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:17 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:17 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:18.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:18.490+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:18 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:18 compute-2 ceph-mon[77081]: pgmap v2628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:18 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1400842831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:55:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1400842831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:55:18 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:19 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:19.511+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:19 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:20 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:20.553+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:55:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:55:20 compute-2 ceph-mon[77081]: pgmap v2629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:20 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:21 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:21.563+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:21 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:21 compute-2 ceph-mon[77081]: pgmap v2630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:22.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:22.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:22 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:22.585+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:22 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:23 compute-2 podman[264602]: 2026-01-22 14:55:23.025379106 +0000 UTC m=+0.090160039 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:55:23 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:23.571+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:23 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:23 compute-2 ceph-mon[77081]: pgmap v2631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:23 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:24.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:24.551+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:24 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:24 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:25.564+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:25 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:25 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:25 compute-2 ceph-mon[77081]: pgmap v2632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:26.566+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:26 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:26 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:26 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:27.557+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:27 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:28 compute-2 ceph-mon[77081]: pgmap v2633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:28 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:28.521+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:28 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:28.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:55:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 79K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1889 writes, 9643 keys, 1889 commit groups, 1.0 writes per commit group, ingest: 16.44 MB, 0.03 MB/s
                                           Interval WAL: 1889 writes, 1889 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     84.5      1.02              0.31        51    0.020       0      0       0.0       0.0
                                             L6      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.5    139.6    120.3      3.91              1.54        50    0.078    453K    26K       0.0       0.0
                                            Sum      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.5    110.8    112.9      4.93              1.86       101    0.049    453K    26K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    129.7    129.6      0.64              0.31        14    0.046     89K   3619       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0    139.6    120.3      3.91              1.54        50    0.078    453K    26K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     84.7      1.01              0.31        50    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.084, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.54 GB write, 0.12 MB/s write, 0.53 GB read, 0.11 MB/s read, 4.9 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 57.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000398 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3060,55.08 MB,18.1191%) FilterBlock(101,1.23 MB,0.403088%) IndexBlock(101,1.63 MB,0.536402%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
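The rates RocksDB prints in the DB Stats block can be re-derived from its own raw counts, which is a quick way to confirm the dump covers the expected 600 s interval: 16.44 MB ingested over 600 s is ~0.027 MB/s (printed rounded as 0.03 MB/s), and 1889 WAL writes against 1889 syncs is exactly 1.00 writes per sync:

    # Figures copied from the "Interval writes" / "Interval WAL" lines above.
    interval_secs = 600.0
    ingest_mb = 16.44
    wal_writes = wal_syncs = 1889

    print(f'ingest rate:     {ingest_mb / interval_secs:.3f} MB/s')  # 0.027
    print(f'writes per sync: {wal_writes / wal_syncs:.2f}')          # 1.00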
Jan 22 14:55:29 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:29 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4718 sec, osd.2 has slow ops (SLOW_OPS)
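The "blocked for 4718 sec" figure dates the failure: subtracting it from the 14:55:29 report time puts the onset of the oldest stuck op at about 13:36:51, well before this excerpt begins. A quick check:

    from datetime import datetime, timedelta

    # "oldest one blocked for 4718 sec", reported Jan 22 at 14:55:29 (log above)
    reported = datetime(2026, 1, 22, 14, 55, 29)
    print((reported - timedelta(seconds=4718)).time())   # 13:36:51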
Jan 22 14:55:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:29.516+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:29 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:30 compute-2 ceph-mon[77081]: pgmap v2634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:30 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:30.515+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:30 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:31 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:31.485+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:31 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:32 compute-2 ceph-mon[77081]: pgmap v2635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:32 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:32.530+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:32 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:33 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:33.533+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:33 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 14:55:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:34.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 14:55:34 compute-2 sudo[264633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:34 compute-2 sudo[264633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:34 compute-2 sudo[264633]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:34 compute-2 sudo[264658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:34 compute-2 sudo[264658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:34 compute-2 sudo[264658]: pam_unix(sudo:session): session closed for user root
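The sudo[PID] triplets (command record, pam session open, pam session close) arrive in bursts of /bin/true runs for ceph-admin, a pattern consistent with cephadm's periodic host checks; compare the explicit cephadm gather-facts invocation near the end of this excerpt. A sketch that pairs the records by PID to list each completed sudo session (regexes written only against the line shapes above):

    import re
    import sys

    CMD_RE   = re.compile(r'sudo\[(?P<pid>\d+)\]: (?P<user>\S+) : .*COMMAND=(?P<cmd>.+)$')
    CLOSE_RE = re.compile(r'sudo\[(?P<pid>\d+)\]: pam_unix\(sudo:session\): session closed')

    def sudo_sessions(lines):
        pending, done = {}, []
        for line in lines:
            if (m := CMD_RE.search(line)):
                pending[m['pid']] = (m['user'], m['cmd'])
            elif (m := CLOSE_RE.search(line)) and m['pid'] in pending:
                user, cmd = pending.pop(m['pid'])
                done.append((m['pid'], user, cmd))
        return done

    for pid, user, cmd in sudo_sessions(sys.stdin):
        print(f'sudo[{pid}] {user}: {cmd}')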
Jan 22 14:55:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:34.533+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:34 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:34.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:34 compute-2 ceph-mon[77081]: pgmap v2636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:34 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:34 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:35.514+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:35 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:35 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:55:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:55:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:36.489+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:36 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:36.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:37 compute-2 ceph-mon[77081]: pgmap v2637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:37 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:37.457+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:37 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:55:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:38.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:55:38 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:38 compute-2 ceph-mon[77081]: pgmap v2638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:38 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:38 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:38.447+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 14:55:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 14:55:39 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:39.400+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:39 compute-2 sshd-session[264686]: Invalid user ubuntu from 45.148.10.240 port 55956
Jan 22 14:55:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:39 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:39 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:39 compute-2 sshd-session[264686]: Connection closed by invalid user ubuntu 45.148.10.240 port 55956 [preauth]
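One line of genuine security signal hides in the storage noise: an SSH preauth probe for the invalid user ubuntu from 45.148.10.240, closed without authenticating. A fail2ban-style tally of such attempts by source address (pattern taken from the Invalid user line above):

    import re
    import sys
    from collections import Counter

    INVALID_RE = re.compile(r'Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+')

    hits = Counter()
    for line in sys.stdin:
        if (m := INVALID_RE.search(line)):
            hits[(m['ip'], m['user'])] += 1

    for (ip, user), n in hits.most_common():
        print(f'{n:4d}  {ip:>15}  user={user}')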
Jan 22 14:55:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:40.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:40 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:40.395+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:40 compute-2 ceph-mon[77081]: pgmap v2639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:55:40 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:40 compute-2 podman[264689]: 2026-01-22 14:55:40.986986854 +0000 UTC m=+0.052132769 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
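The podman container-event lines bury their useful fields (name, health_status, health_failing_streak) inside one long parenthesized attribute list. A small extractor, assuming the comma-separated key=value layout shown above and that the real keys precede look-alike label keys such as org.label-schema.name (true for both event lines in this excerpt):

    import re
    import sys

    KEYS = ('name', 'health_status', 'health_failing_streak')

    def podman_health(line):
        if ' container health_status ' not in line:
            return None
        # First match wins; \b keeps container_name= from matching name=.
        return {k: m.group(1)
                for k in KEYS
                if (m := re.search(rf'\b{k}=([^,)]+)', line))}

    for line in sys.stdin:
        if (rec := podman_health(line)):
            print(rec)   # e.g. {'name': 'ovn_metadata_agent',
                         #       'health_status': 'healthy',
                         #       'health_failing_streak': '0'}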
Jan 22 14:55:41 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:41.366+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:41 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:41 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:55:41.937 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:55:41 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:55:41.937 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:55:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:42.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:42 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:42.386+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:42.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:42 compute-2 ceph-mon[77081]: pgmap v2640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:55:42 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:43 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:43.436+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:44 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:44 compute-2 ceph-mon[77081]: pgmap v2641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 14:55:44 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:44 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:44.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:44 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:44.440+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:44 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:55:44.940 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
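Read together, the agent lines above show the OVN metadata liveness round-trip as it appears in this log: SB_Global.nb_cfg ticks from 39 to 40, the agent waits the 3 seconds it announced at 14:55:41, then acknowledges by writing neutron:ovn-metadata-sb-cfg=40 into its Chassis_Private row. A sketch that checks the acknowledgement keeps pace with nb_cfg (patterns written against these lines only):

    import re
    import sys

    NBCFG_RE = re.compile(r'row=SB_Global\(.*?nb_cfg=(\d+)')
    ACK_RE   = re.compile(r"'neutron:ovn-metadata-sb-cfg': '(\d+)'")

    nb_cfg = ack = None
    for line in sys.stdin:
        if (m := NBCFG_RE.search(line)):
            nb_cfg = int(m.group(1))
        if (m := ACK_RE.search(line)):
            ack = int(m.group(1))

    # A live agent acknowledges promptly, so ack trails nb_cfg by at most a tick.
    print(f'nb_cfg={nb_cfg} acked={ack}')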
Jan 22 14:55:45 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:45.425+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:46.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:46 compute-2 ceph-mon[77081]: pgmap v2642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:46 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:46 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:46.433+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:47 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:55:47.233 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:55:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:55:47.233 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:55:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:55:47.234 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:55:47 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:47.439+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:48.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:48 compute-2 ceph-mon[77081]: pgmap v2643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:48 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:48 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:48.402+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:49 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:49 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4737 sec, osd.2 has slow ops (SLOW_OPS)
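The Health check updates are worth reading as a series: the blocked-for figure grows at about one second per second of wall clock (4718 → 4723 → 4728 → 4732 → 4737 across 14:55:29 to 14:55:49), meaning the same oldest op stays stuck rather than being replaced by newer ones. A quick slope check over those five pairs:

    # (report time, "blocked for N sec") pairs from the updates above.
    samples = [('14:55:29', 4718), ('14:55:34', 4723), ('14:55:39', 4728),
               ('14:55:44', 4732), ('14:55:49', 4737)]

    def secs(hms):
        h, m, s = map(int, hms.split(':'))
        return h * 3600 + m * 60 + s

    (t0, b0), (tn, bn) = samples[0], samples[-1]
    print(f'slope: {(bn - b0) / (secs(tn) - secs(t0)):.2f} s blocked per s elapsed')  # 0.95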
Jan 22 14:55:49 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:49.444+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:50 compute-2 ceph-mon[77081]: pgmap v2644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:50 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:50 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:50.399+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:51 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:51 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:51.427+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:52 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:52.392+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:52 compute-2 ceph-mon[77081]: pgmap v2645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 425 B/s rd, 340 B/s wr, 0 op/s
Jan 22 14:55:52 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:52.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:53 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:53.354+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:53 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:55:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:55:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:55:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:55:54 compute-2 podman[264714]: 2026-01-22 14:55:54.063713236 +0000 UTC m=+0.111418115 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 14:55:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:55:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:54.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:55:54 compute-2 sudo[264740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:54 compute-2 sudo[264740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:54 compute-2 sudo[264740]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:54 compute-2 sudo[264765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:54 compute-2 sudo[264765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:54 compute-2 sudo[264765]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:54 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:54.377+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:54 compute-2 ceph-mon[77081]: pgmap v2646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 14:55:54 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:55:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:55:54 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4742 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:54.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:55.332+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:55 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:55 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:56.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:56.304+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:56 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:56 compute-2 ceph-mon[77081]: pgmap v2647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 340 B/s wr, 1 op/s
Jan 22 14:55:56 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:56.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:57 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:57.338+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:57 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:57 compute-2 sudo[264792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:57 compute-2 sudo[264792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:57 compute-2 sudo[264792]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:58 compute-2 sudo[264817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:55:58 compute-2 sudo[264817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:58 compute-2 sudo[264817]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:55:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:55:58 compute-2 sudo[264842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:55:58 compute-2 sudo[264842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:58 compute-2 sudo[264842]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:58 compute-2 sudo[264867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:55:58 compute-2 sudo[264867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:55:58 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:58.346+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:58 compute-2 ceph-mon[77081]: pgmap v2648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:55:58 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:55:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:55:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:58.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:55:59 compute-2 sudo[264867]: pam_unix(sudo:session): session closed for user root
Jan 22 14:55:59 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:59.316+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:55:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:55:59 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:55:59 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4747 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:55:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:00 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:00.351+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:00.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:01 compute-2 ceph-mon[77081]: pgmap v2649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:01 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:56:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:56:01 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:01.320+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:02.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:02 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:02 compute-2 ceph-mon[77081]: pgmap v2650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:56:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:56:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:56:02 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:02 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:02.343+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:03 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:03.342+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:03 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:04 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:04.309+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:04.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:04 compute-2 ceph-mon[77081]: pgmap v2651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:04 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:04 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:05 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:05.351+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:05 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:06.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:06.316+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:06.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:06 compute-2 ceph-mon[77081]: pgmap v2652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 14:56:06 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:07 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:07.281+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:07 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:07 compute-2 ceph-mon[77081]: pgmap v2653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 11 op/s
Jan 22 14:56:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:08 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:08.305+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:08.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:08 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:08 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:08 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:09 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:09.298+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:10 compute-2 ceph-mon[77081]: pgmap v2654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:10 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:10.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:10 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:10.272+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:10.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:10 compute-2 sudo[264929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:10 compute-2 sudo[264929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:10 compute-2 sudo[264929]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:10 compute-2 sudo[264954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:56:10 compute-2 sudo[264954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:10 compute-2 sudo[264954]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:11 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:11.279+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:11 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:56:12 compute-2 podman[264980]: 2026-01-22 14:56:12.011152579 +0000 UTC m=+0.064374787 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:56:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:12 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:12.329+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:12 compute-2 ceph-mon[77081]: pgmap v2655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:12 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:12.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:13.338+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:13 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:13 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:14 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:14.298+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:14 compute-2 ceph-mon[77081]: pgmap v2656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:14 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:14 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:14 compute-2 sudo[265001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:14 compute-2 sudo[265001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:14 compute-2 sudo[265001]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:14 compute-2 sudo[265026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:14 compute-2 sudo[265026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:14 compute-2 sudo[265026]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:15 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:15.326+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:15 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:16.281+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:16 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:16 compute-2 ceph-mon[77081]: pgmap v2657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:16 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:17.235+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:17 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:17 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:18.279+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:18 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:18.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:18 compute-2 ceph-mon[77081]: pgmap v2658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:18 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/680053100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:56:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/680053100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:56:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:19.309+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:19 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:20.275+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:20 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:20.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:21.315+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:21 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:22.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:22.349+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:22 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:22.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:23.319+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:23 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:23 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:23 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:23 compute-2 ceph-mon[77081]: pgmap v2659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:23 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:24 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:56:24.140 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:56:24 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:56:24.141 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:56:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:24.313+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:24 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:24 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-2 ceph-mon[77081]: pgmap v2660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:24 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:24 compute-2 ceph-mon[77081]: pgmap v2661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:24 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:25 compute-2 podman[265057]: 2026-01-22 14:56:25.060248537 +0000 UTC m=+0.114400694 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 14:56:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:25.272+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:25 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:26.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:26.259+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:26 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:26 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:26 compute-2 ceph-mon[77081]: pgmap v2662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:26 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:27.231+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:27 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:27 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:28.203+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:28 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:28.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:28 compute-2 ceph-mon[77081]: pgmap v2663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:28 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:28 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:29.184+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:29 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:29 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:29 compute-2 ceph-mon[77081]: pgmap v2664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:29 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:30.185+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:30 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:31 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:31.188+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:31 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 14:56:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.5 total, 600.0 interval
                                           Cumulative writes: 11K writes, 38K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3411 syncs, 3.26 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 875 writes, 1364 keys, 875 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s
                                           Interval WAL: 875 writes, 419 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 14:56:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:32.181+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:32 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:32 compute-2 ceph-mon[77081]: pgmap v2665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:32 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.397699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792397752, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 2030, "num_deletes": 251, "total_data_size": 3988875, "memory_usage": 4042784, "flush_reason": "Manual Compaction"}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792414756, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2598388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78704, "largest_seqno": 80729, "table_properties": {"data_size": 2590610, "index_size": 4335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19959, "raw_average_key_size": 21, "raw_value_size": 2573537, "raw_average_value_size": 2749, "num_data_blocks": 186, "num_entries": 936, "num_filter_entries": 936, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093653, "oldest_key_time": 1769093653, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 17109 microseconds, and 5868 cpu microseconds.
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414812) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2598388 bytes OK
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414836) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416367) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416387) EVENT_LOG_v1 {"time_micros": 1769093792416380, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3979490, prev total WAL file size 3979490, number of live WAL files 2.
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.418062) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2537KB)], [162(10MB)]
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792418112, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 13241354, "oldest_snapshot_seqno": -1}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 12758 keys, 11618320 bytes, temperature: kUnknown
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792479450, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 11618320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11548234, "index_size": 37077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 350844, "raw_average_key_size": 27, "raw_value_size": 11330398, "raw_average_value_size": 888, "num_data_blocks": 1348, "num_entries": 12758, "num_filter_entries": 12758, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.479699) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11618320 bytes
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.480895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.6 rd, 189.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.1 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(9.6) write-amplify(4.5) OK, records in: 13275, records dropped: 517 output_compression: NoCompression
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.480911) EVENT_LOG_v1 {"time_micros": 1769093792480903, "job": 104, "event": "compaction_finished", "compaction_time_micros": 61418, "compaction_time_cpu_micros": 27182, "output_level": 6, "num_output_files": 1, "total_output_size": 11618320, "num_input_records": 13275, "num_output_records": 12758, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792481539, "job": 104, "event": "table_file_deletion", "file_number": 164}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792483507, "job": 104, "event": "table_file_deletion", "file_number": 162}
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.417963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:56:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:32.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:33.221+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:33 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:33 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:34 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:56:34.144 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:56:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:34.204+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:34 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:34 compute-2 sudo[265089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:34 compute-2 sudo[265089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:34 compute-2 sudo[265089]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:34 compute-2 sudo[265114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:34 compute-2 sudo[265114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:34 compute-2 sudo[265114]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:34 compute-2 ceph-mon[77081]: pgmap v2666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:34 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:34 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:35.164+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:35 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:35 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:36.124+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:36 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:36.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:36 compute-2 ceph-mon[77081]: pgmap v2667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:36 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:37.155+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:37 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:38.139+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:38 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:38.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:38 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:38 compute-2 ceph-mon[77081]: pgmap v2668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:39.103+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:39 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:39 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 14:56:39 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:39 compute-2 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:40.068+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:40 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:40.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:40 compute-2 ceph-mon[77081]: pgmap v2669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:40 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:40.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:41.048+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:41 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:41 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:42.021+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:42 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:42.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:42 compute-2 ceph-mon[77081]: pgmap v2670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:42 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:42.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:43.037+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:43 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:43 compute-2 podman[265144]: 2026-01-22 14:56:43.061538977 +0000 UTC m=+0.098164347 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 14:56:43 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:44.069+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:44 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:44.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:45.021+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:45 compute-2 ceph-mon[77081]: pgmap v2671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:45 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:45 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 4793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:45.979+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:46.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:46 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:46 compute-2 ceph-mon[77081]: pgmap v2672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:46 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:47.023+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:47 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:56:47.234 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:56:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:56:47.234 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:56:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:56:47.235 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:56:47 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:48.063+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:48 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:48.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:49 compute-2 ceph-mon[77081]: pgmap v2673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:49 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:49.078+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:49 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:50.042+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:50 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:50.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:50 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 14:56:50 compute-2 ceph-mon[77081]: pgmap v2674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:50 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 4798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:50 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:50.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:51.025+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:51 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:51 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:52.014+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:52 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:52.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:56:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:56:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:53.047+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:53 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:53 compute-2 ceph-mon[77081]: pgmap v2675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:53 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:53 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:54.098+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:54 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:54.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:54 compute-2 ceph-mon[77081]: pgmap v2676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:54 compute-2 ceph-mon[77081]: Health check update: 47 slow ops, oldest one blocked for 4803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:56:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:56:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:54 compute-2 sudo[265168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:54 compute-2 sudo[265168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:54 compute-2 sudo[265168]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:54 compute-2 sudo[265193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:56:54 compute-2 sudo[265193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:56:54 compute-2 sudo[265193]: pam_unix(sudo:session): session closed for user root
Jan 22 14:56:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:55.052+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:55 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:55 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:55 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:56.040+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:56 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:56 compute-2 podman[265219]: 2026-01-22 14:56:56.061173012 +0000 UTC m=+0.116364126 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:56:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:56.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:56 compute-2 ceph-mon[77081]: pgmap v2677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:56 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:56:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:56.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:56:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:57.001+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:57 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:57 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:58.042+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:58 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:56:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:56:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:58.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:56:58 compute-2 ceph-mon[77081]: pgmap v2678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:56:58 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:59.015+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:59 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:56:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:56:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:00.056+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:00 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:00.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:00 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 14:57:00 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:00 compute-2 ceph-mon[77081]: Health check update: 47 slow ops, oldest one blocked for 4808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:00.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:00 compute-2 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 22 14:57:01 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:01.025+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:01 compute-2 ceph-mon[77081]: pgmap v2679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 22 14:57:01 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:01 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:02 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:02.024+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:02 compute-2 ceph-mon[77081]: pgmap v2680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 14:57:02 compute-2 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 14:57:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:02.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:03.019+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:03 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:03 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:03.990+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:03 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:04.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:04 compute-2 ceph-mon[77081]: pgmap v2681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 22 14:57:04 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:04 compute-2 ceph-mon[77081]: Health check update: 47 slow ops, oldest one blocked for 4813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:04.959+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:04 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:05 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:05 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:06.004+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:06 compute-2 ceph-mon[77081]: pgmap v2682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 0 B/s wr, 73 op/s
Jan 22 14:57:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:06.970+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:06 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:07 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:07.933+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:07 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:08 compute-2 ceph-mon[77081]: pgmap v2683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 14:57:08 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:08 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:08.971+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:08 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:09 compute-2 ceph-mon[77081]: pgmap v2684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 14:57:09 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:09.976+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:09 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:10.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:10 compute-2 sudo[265253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:10 compute-2 sudo[265253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:10 compute-2 sudo[265253]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:10.944+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:10 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:10 compute-2 sudo[265278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:57:10 compute-2 sudo[265278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:10 compute-2 sudo[265278]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:11 compute-2 sudo[265303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:11 compute-2 sudo[265303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:11 compute-2 sudo[265303]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:11 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:11 compute-2 sudo[265328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:57:11 compute-2 sudo[265328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:11 compute-2 sudo[265328]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:11.967+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:11 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:12.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:12 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:12 compute-2 ceph-mon[77081]: pgmap v2685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Jan 22 14:57:12 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:12 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:12.961+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:12 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:57:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:57:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:57:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:57:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:57:13 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:13.971+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:13 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:14 compute-2 podman[265385]: 2026-01-22 14:57:14.021614846 +0000 UTC m=+0.073488116 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 14:57:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:14.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:14 compute-2 ceph-mon[77081]: pgmap v2686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Jan 22 14:57:14 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:14 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:14.950+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:14 compute-2 sudo[265405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:14 compute-2 sudo[265405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:14 compute-2 sudo[265405]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:15 compute-2 sudo[265430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:15 compute-2 sudo[265430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:15 compute-2 sudo[265430]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:15 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:15 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:15 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:15.944+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:16.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:16 compute-2 ceph-mon[77081]: pgmap v2687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Jan 22 14:57:16 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:16 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:16.923+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:17 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:17.896+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:17 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:18.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:57:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:57:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:57:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:57:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:18 compute-2 ceph-mon[77081]: pgmap v2688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s
Jan 22 14:57:18 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:57:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:57:18 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:18.920+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:19 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:19.871+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:19 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:19 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:57:20 compute-2 sudo[265457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:20 compute-2 sudo[265457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:20 compute-2 sudo[265457]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:20 compute-2 sudo[265482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:57:20 compute-2 sudo[265482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:20 compute-2 sudo[265482]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:20 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:20.844+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:20 compute-2 ceph-mon[77081]: pgmap v2689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:20 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:21 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:21.834+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:21 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:21 compute-2 ceph-mon[77081]: pgmap v2690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:22 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:22.815+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:23 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:23 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:23 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:23.793+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:24 compute-2 ceph-mon[77081]: pgmap v2691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:24 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:24 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:24.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:24 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:24.761+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:25 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:25 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:25.787+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:26 compute-2 ceph-mon[77081]: pgmap v2692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:26 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:26 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:57:26.363 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:57:26 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:57:26.364 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:57:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:26.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:26 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:26.759+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:27 compute-2 podman[265511]: 2026-01-22 14:57:27.035892245 +0000 UTC m=+0.089868538 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 14:57:27 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:27 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:27.766+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:28 compute-2 ceph-mon[77081]: pgmap v2693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:28 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:28 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:28.729+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:29 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:57:29.365 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:57:29 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:29 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4838 sec, osd.2 has slow ops (SLOW_OPS)
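The mon's SLOW_OPS line lets you back out when the oldest op got stuck: 14:57:29 minus 4838 seconds is 13:36:51, which is roughly when the repeatedly reported osd_op against rbd_mirror_snapshot_schedule first blocked. The same arithmetic as a small triage sketch:

    import re
    from datetime import datetime, timedelta

    line = ('Jan 22 14:57:29 compute-2 ceph-mon[77081]: Health check update: '
            '41 slow ops, oldest one blocked for 4838 sec, osd.2 has slow ops '
            '(SLOW_OPS)')
    blocked = int(re.search(r'blocked for (\d+) sec', line).group(1))
    logged_at = datetime(2026, 1, 22, 14, 57, 29)
    print(logged_at - timedelta(seconds=blocked))  # 2026-01-22 13:36:51
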
Jan 22 14:57:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:29.695+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:29 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:30.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:30 compute-2 ceph-mon[77081]: pgmap v2694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:30 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:30.669+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:30 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:30.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:31 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 14:57:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:31.718+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:31 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
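Note the count stepping from 41 to 53 here (and to 91 after the osdmap epoch bump at 14:57:56 below): the op queue on osd.2 is growing, not draining. A quick sketch that tallies the per-timestamp counts from these [WRN] lines; the log path is illustrative:

    import re
    from collections import OrderedDict

    WRN = re.compile(r'^(\w+ +\d+ [\d:]+) .*\[WRN\] : (\d+) slow requests')

    counts = OrderedDict()
    with open('/var/log/messages') as f:  # illustrative path for this journal
        for line in f:
            m = WRN.match(line)
            if m:
                counts[m.group(1)] = int(m.group(2))  # last count per second
    for ts, n in counts.items():
        print(ts, n)  # e.g. ... 14:57:26 41 ... 14:57:31 53 ... 14:57:57 91
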
Jan 22 14:57:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:32.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:32 compute-2 ceph-mon[77081]: pgmap v2695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:32 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:32.721+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:32 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:33 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:33 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:33.688+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:34.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:34 compute-2 ceph-mon[77081]: pgmap v2696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:34 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:34 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:34 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:34.648+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:35 compute-2 sudo[265543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:35 compute-2 sudo[265543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:35 compute-2 sudo[265543]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:35 compute-2 sudo[265568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:35 compute-2 sudo[265568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:35 compute-2 sudo[265568]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:35 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:35 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:35.666+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:36.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:36 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:36.653+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:36 compute-2 ceph-mon[77081]: pgmap v2697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:36 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:37 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:37.666+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:37 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:38 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:38.712+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:38 compute-2 ceph-mon[77081]: pgmap v2698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:38 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:39.758+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:39 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:39 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:39 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:40.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:40.799+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:40 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:40 compute-2 ceph-mon[77081]: pgmap v2699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:40 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:41.776+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:41 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:41 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:41 compute-2 ceph-mon[77081]: pgmap v2700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:42.809+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:42 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:42 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:43.810+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:43 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:43 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:43 compute-2 ceph-mon[77081]: pgmap v2701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:44.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:44.835+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:44 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:44 compute-2 podman[265598]: 2026-01-22 14:57:44.991601386 +0000 UTC m=+0.057175447 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 14:57:45 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:45 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:45.882+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:45 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:46 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:46 compute-2 ceph-mon[77081]: pgmap v2702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:46 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:46.839+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:46 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:47 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:57:47.235 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:57:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:57:47.236 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:57:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:57:47.236 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
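The acquire/wait/release triple above is oslo.concurrency's named-lock pattern around ProcessMonitor._check_child_processes. A minimal sketch of the same pattern (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Runs with the named in-process lock held; the acquire/"released"
        # bookkeeping logged above is emitted by lockutils' inner wrapper.
        pass

    _check_child_processes()
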
Jan 22 14:57:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:47.819+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:47 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:48 compute-2 ceph-mon[77081]: pgmap v2703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:48 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:48.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:48.825+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:48 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:49 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:49 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:49.871+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:49 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:50 compute-2 ceph-mon[77081]: pgmap v2704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:50 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:50.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:50.830+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:50 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:51 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:51.850+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:51 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:52.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:52.879+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:52 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:52 compute-2 ceph-mon[77081]: pgmap v2705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:52 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:53 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:53 compute-2 ceph-mon[77081]: pgmap v2706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:53.906+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:53 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 14:57:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:54.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 14:57:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:54.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:54.888+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:54 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:54 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:54 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:57:55 compute-2 sudo[265622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:55 compute-2 sudo[265622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:55 compute-2 sudo[265622]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:55 compute-2 sudo[265647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:57:55 compute-2 sudo[265647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:57:55 compute-2 sudo[265647]: pam_unix(sudo:session): session closed for user root
Jan 22 14:57:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:55.843+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:55 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:55 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:55 compute-2 ceph-mon[77081]: pgmap v2707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:56.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:57:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:56.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:57:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:56.824+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:56 compute-2 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:56 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e163 e163: 3 total, 3 up, 3 in
Jan 22 14:57:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:57.832+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:57 compute-2 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:57:57 compute-2 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 14:57:57 compute-2 ceph-mon[77081]: pgmap v2708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:57:57 compute-2 ceph-mon[77081]: osdmap e163: 3 total, 3 up, 3 in
Jan 22 14:57:58 compute-2 podman[265673]: 2026-01-22 14:57:58.070132457 +0000 UTC m=+0.126612635 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 14:57:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:58.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:57:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:57:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:58.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:57:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:58.800+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:58 compute-2 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:57:59 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.022892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879022960, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 250, "total_data_size": 2526993, "memory_usage": 2568984, "flush_reason": "Manual Compaction"}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879034291, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1060655, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80734, "largest_seqno": 82104, "table_properties": {"data_size": 1056089, "index_size": 1833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14035, "raw_average_key_size": 21, "raw_value_size": 1045330, "raw_average_value_size": 1615, "num_data_blocks": 80, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093793, "oldest_key_time": 1769093793, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 11494 microseconds, and 6219 cpu microseconds.
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.034388) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1060655 bytes OK
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.034411) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.036386) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.036412) EVENT_LOG_v1 {"time_micros": 1769093879036403, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.036436) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2520358, prev total WAL file size 2520358, number of live WAL files 2.
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.037726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1035KB)], [165(11MB)]
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879037814, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 12678975, "oldest_snapshot_seqno": -1}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 12926 keys, 9372578 bytes, temperature: kUnknown
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879269405, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 9372578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9305328, "index_size": 33857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32325, "raw_key_size": 355103, "raw_average_key_size": 27, "raw_value_size": 9088417, "raw_average_value_size": 703, "num_data_blocks": 1214, "num_entries": 12926, "num_filter_entries": 12926, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.269803) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 9372578 bytes
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.323663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.7 rd, 40.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(20.8) write-amplify(8.8) OK, records in: 13405, records dropped: 479 output_compression: NoCompression
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.323696) EVENT_LOG_v1 {"time_micros": 1769093879323681, "job": 106, "event": "compaction_finished", "compaction_time_micros": 231720, "compaction_time_cpu_micros": 56240, "output_level": 6, "num_output_files": 1, "total_output_size": 9372578, "num_input_records": 13405, "num_output_records": 12926, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879324379, "job": 106, "event": "table_file_deletion", "file_number": 167}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879328661, "job": 106, "event": "table_file_deletion", "file_number": 165}
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.037594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:57:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:57:59 compute-2 sshd-session[265698]: Invalid user sdadmin from 45.148.10.240 port 38634
Jan 22 14:57:59 compute-2 sshd-session[265698]: Connection closed by invalid user sdadmin 45.148.10.240 port 38634 [preauth]
Jan 22 14:57:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:59.841+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:59 compute-2 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:57:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:00 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:00 compute-2 ceph-mon[77081]: pgmap v2710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Jan 22 14:58:00 compute-2 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:00 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:00.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:00.847+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:00 compute-2 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e164 e164: 3 total, 3 up, 3 in
Jan 22 14:58:01 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:01.812+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:01 compute-2 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:02 compute-2 ceph-mon[77081]: pgmap v2711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 22 14:58:02 compute-2 ceph-mon[77081]: osdmap e164: 3 total, 3 up, 3 in
Jan 22 14:58:02 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:02.776+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:02 compute-2 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:03 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:03.750+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:03 compute-2 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:04 compute-2 ceph-mon[77081]: pgmap v2713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Jan 22 14:58:04 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:04 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:04.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:04.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:04.799+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:04 compute-2 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:05 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:05.783+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:05 compute-2 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:06 compute-2 ceph-mon[77081]: pgmap v2714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 3.1 KiB/s wr, 23 op/s
Jan 22 14:58:06 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 e165: 3 total, 3 up, 3 in
Jan 22 14:58:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:06.808+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:06 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:07 compute-2 ceph-mon[77081]: osdmap e165: 3 total, 3 up, 3 in
Jan 22 14:58:07 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:07.767+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:07 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:08.730+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:08 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:08 compute-2 ceph-mon[77081]: pgmap v2716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.1 KiB/s wr, 20 op/s
Jan 22 14:58:08 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:09 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:09 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:09.770+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:09 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:10.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:10.790+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:10 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:10 compute-2 ceph-mon[77081]: pgmap v2717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.4 KiB/s wr, 20 op/s
Jan 22 14:58:10 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:11 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:11.811+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:11 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:12.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:12.780+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:12 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:12 compute-2 ceph-mon[77081]: pgmap v2718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Jan 22 14:58:12 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:13.762+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:13 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:13 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:14.724+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:14 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:14.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:14 compute-2 ceph-mon[77081]: pgmap v2719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Jan 22 14:58:14 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:14 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:15 compute-2 sudo[265708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:15 compute-2 sudo[265708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:15 compute-2 sudo[265708]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:15 compute-2 sudo[265734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:15 compute-2 sudo[265734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:15 compute-2 sudo[265734]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:15 compute-2 podman[265732]: 2026-01-22 14:58:15.65772435 +0000 UTC m=+0.077967683 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 22 14:58:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:15.689+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:15 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:15 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:16.660+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:16 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:16.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:16 compute-2 ceph-mon[77081]: pgmap v2720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 26 op/s
Jan 22 14:58:16 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:17.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:17 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:17 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:17 compute-2 ceph-mon[77081]: pgmap v2721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Jan 22 14:58:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:18.602+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:18 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:18.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:18 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2214599222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:58:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2214599222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:58:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:19.624+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:19 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:19 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:19 compute-2 ceph-mon[77081]: pgmap v2722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Jan 22 14:58:19 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:20 compute-2 sudo[265776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:20 compute-2 sudo[265776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:20 compute-2 sudo[265776]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-2 sudo[265801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:58:20 compute-2 sudo[265801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-2 sudo[265801]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-2 sudo[265826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:20 compute-2 sudo[265826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-2 sudo[265826]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-2 sudo[265851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:58:20 compute-2 sudo[265851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:20.636+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:20 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:20.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:20 compute-2 sudo[265851]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:20 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:20 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:21.596+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:21 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:21 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:21 compute-2 ceph-mon[77081]: pgmap v2723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 22 14:58:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:58:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:58:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:58:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:58:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:58:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:22.567+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:22 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:22.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:22 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:23.580+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:23 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:24.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:24 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:24 compute-2 ceph-mon[77081]: pgmap v2724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:24.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:24 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:24.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:25 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:25 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4893 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:25 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:25.647+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:25 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:26 compute-2 ceph-mon[77081]: pgmap v2725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:26 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:26.639+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:26 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:26.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:27 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:27.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:27 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:28 compute-2 ceph-mon[77081]: pgmap v2726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:28 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:28.608+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:28 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:28.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:29 compute-2 podman[265913]: 2026-01-22 14:58:29.132252784 +0000 UTC m=+0.169097727 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 22 14:58:29 compute-2 sudo[265939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:29 compute-2 sudo[265939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:29 compute-2 sudo[265939]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:29 compute-2 sudo[265964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:58:29 compute-2 sudo[265964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:29 compute-2 sudo[265964]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:29.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:29 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:29 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:29 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:58:29 compute-2 ceph-mon[77081]: pgmap v2727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:29 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:30.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:30.657+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:30 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:30.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:31 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:31 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:31 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:58:31.487 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:58:31 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:58:31.490 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:58:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:31.621+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:31 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:32.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:32 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:58:32.493 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:58:32 compute-2 ceph-mon[77081]: pgmap v2728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:32 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:32.608+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:32 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:32.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:33 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:33.587+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:33 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:34.555+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:34 compute-2 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:34 compute-2 ceph-mon[77081]: pgmap v2729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:34 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:34 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e166 e166: 3 total, 3 up, 3 in
Jan 22 14:58:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:34.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:35 compute-2 ceph-osd[79779]: osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:35.514+0000 7f47f8ed4640 -1 osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:35 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:35 compute-2 ceph-mon[77081]: osdmap e166: 3 total, 3 up, 3 in
Jan 22 14:58:35 compute-2 sudo[265992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:35 compute-2 sudo[265992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:35 compute-2 sudo[265992]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:35 compute-2 sudo[266017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:35 compute-2 sudo[266017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:35 compute-2 sudo[266017]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:36 compute-2 ceph-osd[79779]: osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:36.502+0000 7f47f8ed4640 -1 osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e167 e167: 3 total, 3 up, 3 in
Jan 22 14:58:36 compute-2 ceph-mon[77081]: pgmap v2731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 102 B/s rd, 0 op/s
Jan 22 14:58:36 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:37 compute-2 ceph-osd[79779]: osd.2 167 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:37.512+0000 7f47f8ed4640 -1 osd.2 167 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:37 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:37 compute-2 ceph-mon[77081]: osdmap e167: 3 total, 3 up, 3 in
Jan 22 14:58:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e168 e168: 3 total, 3 up, 3 in
Jan 22 14:58:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:38 compute-2 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:38.540+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:38.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:38 compute-2 ceph-mon[77081]: pgmap v2733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Jan 22 14:58:38 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:38 compute-2 ceph-mon[77081]: osdmap e168: 3 total, 3 up, 3 in
Jan 22 14:58:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:39 compute-2 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:39.550+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:39 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:39 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:40.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:40 compute-2 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:40.511+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:40 compute-2 ceph-mon[77081]: pgmap v2735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 5.0 KiB/s wr, 89 op/s
Jan 22 14:58:40 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:41.489+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:41 compute-2 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:41 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:42.468+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:42 compute-2 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:42 compute-2 ceph-mon[77081]: pgmap v2736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 6.6 KiB/s wr, 117 op/s
Jan 22 14:58:42 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:43.420+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:43 compute-2 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:43 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 e169: 3 total, 3 up, 3 in
Jan 22 14:58:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:44.447+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:44 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:44.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:45 compute-2 ceph-mon[77081]: pgmap v2737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 5.2 KiB/s wr, 93 op/s
Jan 22 14:58:45 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:45 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:45 compute-2 ceph-mon[77081]: osdmap e169: 3 total, 3 up, 3 in
Jan 22 14:58:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:45.476+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:45 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:46 compute-2 podman[266048]: 2026-01-22 14:58:46.027888286 +0000 UTC m=+0.075672835 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 14:58:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:46.470+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:46 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:46 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:46 compute-2 ceph-mon[77081]: pgmap v2739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Jan 22 14:58:46 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:46.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:58:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:58:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:58:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:58:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:58:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:58:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:47.502+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:47 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:47 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:48.461+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:48 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:48 compute-2 ceph-mon[77081]: pgmap v2740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.0 KiB/s wr, 54 op/s
Jan 22 14:58:48 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:49.494+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:49 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:49 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:49 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:58:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:50.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:50.531+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:50 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:50 compute-2 ceph-mon[77081]: pgmap v2741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Jan 22 14:58:50 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:51.524+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:51 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:51 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:58:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:58:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:52.518+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:52 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:52 compute-2 ceph-mon[77081]: pgmap v2742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:52 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:53.525+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:53 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:53 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.047935) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934047977, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1060, "num_deletes": 259, "total_data_size": 1747541, "memory_usage": 1778784, "flush_reason": "Manual Compaction"}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934062852, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1147709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82109, "largest_seqno": 83164, "table_properties": {"data_size": 1143055, "index_size": 2113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11768, "raw_average_key_size": 20, "raw_value_size": 1132988, "raw_average_value_size": 1970, "num_data_blocks": 90, "num_entries": 575, "num_filter_entries": 575, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093880, "oldest_key_time": 1769093880, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 15703 microseconds, and 8262 cpu microseconds.
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.063628) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1147709 bytes OK
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.063676) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.065584) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.065622) EVENT_LOG_v1 {"time_micros": 1769093934065610, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.065648) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1742142, prev total WAL file size 1742142, number of live WAL files 2.
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.066905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373731' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1120KB)], [168(9152KB)]
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934066955, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 10520287, "oldest_snapshot_seqno": -1}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 12966 keys, 10366360 bytes, temperature: kUnknown
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934154035, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10366360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10297542, "index_size": 35297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32453, "raw_key_size": 357288, "raw_average_key_size": 27, "raw_value_size": 10078662, "raw_average_value_size": 777, "num_data_blocks": 1270, "num_entries": 12966, "num_filter_entries": 12966, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.154418) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10366360 bytes
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.156195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.7 rd, 118.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.9 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.2) write-amplify(9.0) OK, records in: 13501, records dropped: 535 output_compression: NoCompression
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.156219) EVENT_LOG_v1 {"time_micros": 1769093934156209, "job": 108, "event": "compaction_finished", "compaction_time_micros": 87166, "compaction_time_cpu_micros": 39199, "output_level": 6, "num_output_files": 1, "total_output_size": 10366360, "num_input_records": 13501, "num_output_records": 12966, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934156652, "job": 108, "event": "table_file_deletion", "file_number": 170}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934158982, "job": 108, "event": "table_file_deletion", "file_number": 168}
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.066740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:58:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:54.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:54.526+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:54 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:58:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:54.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:58:54 compute-2 ceph-mon[77081]: pgmap v2743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:54 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:54 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4923 sec, osd.2 has slow ops (SLOW_OPS)
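[annotation] The SLOW_OPS counter advances by roughly five seconds per health update (4923 here, then 4928, 4933, ... further down), so the oldest blocked op dates back about 82 minutes, to roughly 13:37. A minimal watchdog sketch, assuming journal text on stdin (e.g. piped from journalctl -f) and the exact SLOW_OPS phrasing shown above:

    import re, sys

    SLOW_RE = re.compile(
        r'(\d+) slow ops, oldest one blocked for (\d+) sec, (\S+) has slow ops')

    worst = 0
    for line in sys.stdin:
        m = SLOW_RE.search(line)
        if not m:
            continue
        count, blocked, who = int(m.group(1)), int(m.group(2)), m.group(3)
        if blocked > worst:  # only report when the stall deepens
            worst = blocked
            print(f"{who}: {count} slow ops, oldest blocked {blocked}s "
                  f"(~{blocked / 60:.0f} min)")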
Jan 22 14:58:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:55.536+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:55 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:55 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:55 compute-2 sudo[266072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:55 compute-2 sudo[266072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:55 compute-2 sudo[266072]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:55 compute-2 sudo[266097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:58:55 compute-2 sudo[266097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:58:55 compute-2 sudo[266097]: pam_unix(sudo:session): session closed for user root
Jan 22 14:58:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:56.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:56.573+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:56 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:56.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:56 compute-2 ceph-mon[77081]: pgmap v2744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:56 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:57.577+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:57 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:57 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:57 compute-2 ceph-mon[77081]: pgmap v2745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:58.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:58.575+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:58 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:58:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:58:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:58:58 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:58:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:59.564+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:59 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:58:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:59 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:58:59 compute-2 ceph-mon[77081]: pgmap v2746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:58:59 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:00 compute-2 podman[266124]: 2026-01-22 14:59:00.033351057 +0000 UTC m=+0.083250177 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
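[annotation] The podman container-event line packs the health verdict and the entire kolla config_data blob into one record; usually only the name and health_status tokens matter. A hedged extraction sketch, assuming events keep the name=... and health_status=... key=value tokens seen above:

    import re

    def container_health(line):
        # First 'name=' token wins; adequate for the event lines in this log,
        # where name= precedes org.label-schema.name= and friends.
        name = re.search(r'\bname=([\w-]+)', line)
        status = re.search(r'\bhealth_status=(\w+)', line)
        return (name.group(1) if name else None,
                status.group(1) if status else None)
    # the line above -> ('ovn_controller', 'healthy')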
Jan 22 14:59:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:00.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:00.520+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:00 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:00.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:00 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:01.532+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:01 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:01 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:01 compute-2 ceph-mon[77081]: pgmap v2747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:02.573+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:02 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:02.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:02 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:03.524+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:03 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:04 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:04 compute-2 ceph-mon[77081]: pgmap v2748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:04 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:04.517+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:04 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:04.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:05 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:05 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:05.492+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:05 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:06 compute-2 ceph-mon[77081]: pgmap v2749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:06 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:06.448+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:06 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:06.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:07 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:07.484+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:07 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:08 compute-2 ceph-mon[77081]: pgmap v2750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:08 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:08.524+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:08 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:08.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:09 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:09 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:09.488+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:09 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:10 compute-2 ceph-mon[77081]: pgmap v2751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:10 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:10.486+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:10 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:10.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:11 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:11.462+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:11 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:12 compute-2 ceph-mon[77081]: pgmap v2752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:12 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:12.464+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:12 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:13 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:13.459+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:13 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:14 compute-2 ceph-mon[77081]: pgmap v2753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:14 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:14 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:14.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:14.445+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:14 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:14.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:15 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:15.488+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:15 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:16 compute-2 sudo[266160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:16 compute-2 sudo[266160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:16 compute-2 sudo[266160]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:16 compute-2 sudo[266191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:16 compute-2 sudo[266191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:16 compute-2 sudo[266191]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:16 compute-2 podman[266184]: 2026-01-22 14:59:16.24622336 +0000 UTC m=+0.097633621 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 22 14:59:16 compute-2 ceph-mon[77081]: pgmap v2754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:16 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:16.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:16.523+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:16 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:16.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:17 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.315124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957315162, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 556, "num_deletes": 251, "total_data_size": 660256, "memory_usage": 671352, "flush_reason": "Manual Compaction"}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957319743, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 432980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83170, "largest_seqno": 83720, "table_properties": {"data_size": 430246, "index_size": 705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7232, "raw_average_key_size": 19, "raw_value_size": 424544, "raw_average_value_size": 1141, "num_data_blocks": 31, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093934, "oldest_key_time": 1769093934, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 4670 microseconds, and 1572 cpu microseconds.
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.319797) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 432980 bytes OK
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.319908) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321294) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321324) EVENT_LOG_v1 {"time_micros": 1769093957321303, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 657001, prev total WAL file size 657001, number of live WAL files 2.
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321762) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(422KB)], [171(10123KB)]
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957321826, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 10799340, "oldest_snapshot_seqno": -1}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 12827 keys, 9181524 bytes, temperature: kUnknown
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957385551, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 9181524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9114522, "index_size": 33801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32133, "raw_key_size": 355242, "raw_average_key_size": 27, "raw_value_size": 8898614, "raw_average_value_size": 693, "num_data_blocks": 1203, "num_entries": 12827, "num_filter_entries": 12827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.385842) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9181524 bytes
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.387407) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.3 rd, 143.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(46.1) write-amplify(21.2) OK, records in: 13338, records dropped: 511 output_compression: NoCompression
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.387452) EVENT_LOG_v1 {"time_micros": 1769093957387435, "job": 110, "event": "compaction_finished", "compaction_time_micros": 63797, "compaction_time_cpu_micros": 23563, "output_level": 6, "num_output_files": 1, "total_output_size": 9181524, "num_input_records": 13338, "num_output_records": 12827, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957387752, "job": 110, "event": "table_file_deletion", "file_number": 173}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957390393, "job": 110, "event": "table_file_deletion", "file_number": 171}
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 14:59:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
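[annotation] Job 110 is the first compaction in this excerpt with both EVENT_LOG_v1 payloads visible, so its summary line can be checked end to end: 10799340 input bytes over 63797 microseconds gives the 169.3 MB/s read figure, 9181524 output bytes over the same window gives 143.9 MB/s write, and write-amplify 21.2 is 9181524 bytes out against the 432980-byte L0 table flushed by job 109.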
Jan 22 14:59:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:17.521+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:17 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:18 compute-2 ceph-mon[77081]: pgmap v2755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:18 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 14:59:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:59:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 14:59:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
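[annotation] These audit entries look like an OpenStack capacity poll: client.openstack from 192.168.122.10 asking for df and the volumes pool quota, with the JSON in cmd=[...] being the actual mon command payload. A minimal sketch of sending the identical commands, assuming the python-rados bindings, a reachable cluster, and a usable keyring for client.openstack (paths illustrative, error handling omitted):

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            # out carries the JSON reply to the command the mon dispatched
            print(cmd["prefix"], '->', json.loads(out) if ret == 0 else errs)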
Jan 22 14:59:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:18.501+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:18 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 14:59:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 14:59:19 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:19 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:19.535+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:19 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:20 compute-2 ceph-mon[77081]: pgmap v2756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:20 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:20.535+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:20 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:20.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:21.514+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:21 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:21 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:22.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:22 compute-2 ceph-mon[77081]: pgmap v2757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:22 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:22 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:22.562+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:23 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:23.567+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:24 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:24.605+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:24 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:24 compute-2 ceph-mon[77081]: pgmap v2758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:24 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:24 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:24.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:25 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:25.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:25 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:26.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:26 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:26.656+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:26 compute-2 ceph-mon[77081]: pgmap v2759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:26 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:27 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:27.694+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:27 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:28.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:28 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:28.655+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:28.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:29.617+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:29 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:30 compute-2 ceph-mon[77081]: pgmap v2760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:30 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:30 compute-2 sudo[266234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:30 compute-2 sudo[266234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-2 sudo[266234]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-2 sudo[266261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:59:30 compute-2 sudo[266261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-2 sudo[266261]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-2 podman[266258]: 2026-01-22 14:59:30.222813974 +0000 UTC m=+0.093282110 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 14:59:30 compute-2 sudo[266306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:30 compute-2 sudo[266306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-2 sudo[266306]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-2 sudo[266335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 14:59:30 compute-2 sudo[266335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:30.572+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:30 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:30 compute-2 sudo[266335]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-2 sudo[266381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:30 compute-2 sudo[266381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-2 sudo[266381]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:30.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:30 compute-2 sudo[266407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:59:30 compute-2 sudo[266407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:30 compute-2 sudo[266407]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:31 compute-2 sudo[266432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:31 compute-2 sudo[266432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:31 compute-2 sudo[266432]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:31 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:31 compute-2 ceph-mon[77081]: pgmap v2761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:31 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:31 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:31 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 14:59:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 14:59:31 compute-2 sudo[266457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 14:59:31 compute-2 sudo[266457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:31.578+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:31 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:31 compute-2 podman[266554]: 2026-01-22 14:59:31.617715343 +0000 UTC m=+0.068555155 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 14:59:31 compute-2 podman[266554]: 2026-01-22 14:59:31.707822662 +0000 UTC m=+0.158662474 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 14:59:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:32.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:32 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:59:32.547 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 14:59:32 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:59:32.548 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 14:59:32 compute-2 podman[266705]: 2026-01-22 14:59:32.586015703 +0000 UTC m=+0.082530418 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:59:32 compute-2 podman[266705]: 2026-01-22 14:59:32.600675794 +0000 UTC m=+0.097190509 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 14:59:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:32.623+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:32 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:32 compute-2 ceph-mon[77081]: pgmap v2762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:32 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:32 compute-2 podman[266772]: 2026-01-22 14:59:32.813200949 +0000 UTC m=+0.047342778 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, name=keepalived)
Jan 22 14:59:32 compute-2 podman[266772]: 2026-01-22 14:59:32.828529687 +0000 UTC m=+0.062671516 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, distribution-scope=public, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, description=keepalived for Ceph)
Jan 22 14:59:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:32 compute-2 sudo[266457]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:32 compute-2 sudo[266807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:33 compute-2 sudo[266807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:33 compute-2 sudo[266807]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:33 compute-2 sudo[266832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 14:59:33 compute-2 sudo[266832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:33 compute-2 sudo[266832]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:33 compute-2 sudo[266857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:33 compute-2 sudo[266857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:33 compute-2 sudo[266857]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:33 compute-2 sudo[266882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 14:59:33 compute-2 sudo[266882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:33.640+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:33 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:33 compute-2 sudo[266882]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:33 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:33 compute-2 ceph-mon[77081]: pgmap v2763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:59:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 14:59:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:34.667+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:34 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:34.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:34 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 14:59:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 14:59:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 14:59:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:35.642+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:35 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:35 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:35 compute-2 ceph-mon[77081]: pgmap v2764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:35 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:36 compute-2 sudo[266940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:36 compute-2 sudo[266940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:36 compute-2 sudo[266940]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:36 compute-2 sudo[266965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:36 compute-2 sudo[266965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:36 compute-2 sudo[266965]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:36.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:36.638+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:36 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:36.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:36 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:37.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:37 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:37 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:37 compute-2 ceph-mon[77081]: pgmap v2765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:38 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:59:38.551 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 14:59:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:38.621+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:38 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:39 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:39.589+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:39 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:40 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:40 compute-2 ceph-mon[77081]: pgmap v2766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:40 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:40 compute-2 sudo[266992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:40 compute-2 sudo[266992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:40 compute-2 sudo[266992]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:40 compute-2 sudo[267017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 14:59:40 compute-2 sudo[267017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:40 compute-2 sudo[267017]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:40.549+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:40 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:41 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 14:59:41 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:41.508+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:41 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:42 compute-2 ceph-mon[77081]: pgmap v2767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:42 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:42.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:42 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:42.492+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:42.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:43 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:43.523+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:43 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:44 compute-2 ceph-mon[77081]: pgmap v2768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:44 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:44.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:44.571+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:44 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:45 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:45 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:45.615+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:45 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:46 compute-2 ceph-mon[77081]: pgmap v2769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:46 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 14:59:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:46.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 14:59:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:46.568+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:46 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:47 compute-2 podman[267046]: 2026-01-22 14:59:47.036298275 +0000 UTC m=+0.089224248 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 14:59:47 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:59:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 14:59:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:59:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 14:59:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 14:59:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 14:59:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:47.606+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:47 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:48 compute-2 ceph-mon[77081]: pgmap v2770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:48 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:48.617+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:48 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:48.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:49 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:49 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:49.642+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:50 compute-2 ceph-mon[77081]: pgmap v2771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:50 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:50 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:50 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:50.645+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:51 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:51 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:51.616+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:52 compute-2 ceph-mon[77081]: pgmap v2772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:52 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:52 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:52.577+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:52.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:53 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:53.605+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:53 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:54 compute-2 ceph-mon[77081]: pgmap v2773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:54 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:54.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:54.632+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:54 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:54.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:55 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:55 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 14:59:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:55.586+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:55 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 14:59:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:56.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 14:59:56 compute-2 ceph-mon[77081]: pgmap v2774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:56 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:56 compute-2 sudo[267071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:56 compute-2 sudo[267071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:56 compute-2 sudo[267071]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:56 compute-2 sudo[267096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 14:59:56 compute-2 sudo[267096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 14:59:56 compute-2 sudo[267096]: pam_unix(sudo:session): session closed for user root
Jan 22 14:59:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:56.582+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:56 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:56.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:57 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:57.613+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:57 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:58 compute-2 ceph-mon[77081]: pgmap v2775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 14:59:58 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:58.637+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:58 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 14:59:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 14:59:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:58.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 14:59:59 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 14:59:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 14:59:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:59.685+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:59 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 14:59:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:00 compute-2 ceph-mon[77081]: pgmap v2776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:00 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 15:00:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 15:00:00 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:00.654+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:00 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:00.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:01 compute-2 podman[267124]: 2026-01-22 15:00:01.055475765 +0000 UTC m=+0.112469426 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:00:01 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:01.671+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:01 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:02.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:02 compute-2 ceph-mon[77081]: pgmap v2777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:02 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:02.675+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:02 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:02.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:03 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:03.632+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:03 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:04.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:04 compute-2 ceph-mon[77081]: pgmap v2778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:04 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:04.650+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:04 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:04.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:05.647+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:05 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:05 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:05 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:06.616+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:06 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:06 compute-2 ceph-mon[77081]: pgmap v2779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:06 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:06.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:07.615+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:07 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:07 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:08.651+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:08 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:08 compute-2 ceph-mon[77081]: pgmap v2780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:08 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:08.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:09.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:09 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:09 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:10.662+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:10 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:10 compute-2 ceph-mon[77081]: pgmap v2781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:10 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:10 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:10.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:11.655+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:11 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:11 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:12.691+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:12 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:12.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:12 compute-2 ceph-mon[77081]: pgmap v2782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:12 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:13.696+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:13 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:13 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:00:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:14.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:14.739+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:14 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:14.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:14 compute-2 ceph-mon[77081]: pgmap v2783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:14 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:15.708+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:15 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:16 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:16 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 5003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:16 compute-2 ceph-mon[77081]: pgmap v2784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:16 compute-2 sudo[267157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:16 compute-2 sudo[267157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:16 compute-2 sudo[267157]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:16.687+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:16 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:16 compute-2 sudo[267182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:16 compute-2 sudo[267182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:16 compute-2 sudo[267182]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:16.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:17 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:17 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:17 compute-2 sshd-session[267208]: Invalid user admin from 45.148.10.240 port 44166
Jan 22 15:00:17 compute-2 podman[267210]: 2026-01-22 15:00:17.533159601 +0000 UTC m=+0.082614803 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 15:00:17 compute-2 sshd-session[267208]: Connection closed by invalid user admin 45.148.10.240 port 44166 [preauth]
Jan 22 15:00:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:17.682+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:17 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:18 compute-2 ceph-mon[77081]: pgmap v2785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:18 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:00:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:00:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:18.661+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:18 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:19 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:19.685+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:19 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:20 compute-2 ceph-mon[77081]: pgmap v2786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:20 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:20 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:20.707+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:20 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:21 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:21.729+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:21 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:22 compute-2 ceph-mon[77081]: pgmap v2787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
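Every pgmap tick in this window shows the same two PGs stuck in active+clean+laggy, which is the PG-level view of the osd.2 stall. They can be listed by state; a sketch, assuming this Ceph release accepts a state filter on pg ls:

    # Sketch: identify the two active+clean+laggy PGs from the pgmap line.
    import json, subprocess

    pgs = json.loads(subprocess.check_output(
        ["ceph", "pg", "ls", "laggy", "--format", "json"], text=True))
    for pg in pgs["pg_stats"]:
        print(pg["pgid"], pg["state"], pg["acting_primary"])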
Jan 22 15:00:22 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:22.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:22.734+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:22 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:23 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:23.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:23 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:24 compute-2 ceph-mon[77081]: pgmap v2788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:24 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:24.659+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:24 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:25 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:25 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:25.613+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:25 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:26 compute-2 ceph-mon[77081]: pgmap v2789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:26 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:26.606+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:26 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:27 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1400099288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1400099288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:27 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:00:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:27.591+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:27 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:28 compute-2 ceph-mon[77081]: pgmap v2790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:28 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:28.557+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:28 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:29.527+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:29 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:29 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:30.495+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:30 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:30 compute-2 ceph-mon[77081]: pgmap v2791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 22 15:00:30 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:30 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:31.526+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:31 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:31 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:32 compute-2 podman[267237]: 2026-01-22 15:00:32.060726107 +0000 UTC m=+0.113018573 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 15:00:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:32.493+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:32 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:32.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:32 compute-2 ceph-mon[77081]: pgmap v2792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Jan 22 15:00:32 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:33.502+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:33 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:33 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:33 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:00:33.906 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:00:33 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:00:33.907 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
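The agent saw SB_Global.nb_cfg move from 44 to 45 and, per its anti-thundering-herd delay, waits 10 s before acking; the matching write lands at 15:00:43.909 below as a DbSetCommand setting neutron:ovn-metadata-sb-cfg=45 on Chassis_Private c4fa18b6-ed0f-47ac-8eec-d1399749aa8e. Reading the ack back is one southbound query, assuming ovn-sbctl on this host can reach the SB DB:

    # Sketch: confirm the sb-cfg ack the agent writes after its 10 s delay.
    import subprocess

    print(subprocess.check_output(
        ["ovn-sbctl", "get", "Chassis_Private",
         "c4fa18b6-ed0f-47ac-8eec-d1399749aa8e", "external_ids"], text=True))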
Jan 22 15:00:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:34.508+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:34 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:34 compute-2 ceph-mon[77081]: pgmap v2793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 14 op/s
Jan 22 15:00:34 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:34.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:35.492+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:35 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:35 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:35 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:36.510+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:36 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:00:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:00:36 compute-2 sudo[267266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:36 compute-2 sudo[267266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:36 compute-2 sudo[267266]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:36 compute-2 ceph-mon[77081]: pgmap v2794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Jan 22 15:00:36 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:00:36 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3575390744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:00:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:36.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:36 compute-2 sudo[267292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:36 compute-2 sudo[267292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:36 compute-2 sudo[267292]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:37.510+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:37 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:37 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:38.501+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:38 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:38 compute-2 ceph-mon[77081]: pgmap v2795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 938 B/s wr, 23 op/s
Jan 22 15:00:38 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:38.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:39.462+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:39 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:39 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:40.481+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:40 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:40 compute-2 sudo[267318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:40 compute-2 sudo[267318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:40 compute-2 sudo[267318]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:40 compute-2 sudo[267343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:00:40 compute-2 sudo[267343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:40 compute-2 sudo[267343]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:40 compute-2 sudo[267368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:40 compute-2 sudo[267368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:40 compute-2 sudo[267368]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:40 compute-2 sudo[267393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:00:40 compute-2 sudo[267393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
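This sudo burst is the mgr's cephadm module taking host inventory over SSH as ceph-admin: a couple of /bin/true connectivity checks, a which python3, then the bundled cephadm binary with gather-facts (the session closes at 15:00:41 below). The same call can be made by hand; the printed field names are assumptions based on cephadm's host-facts output:

    # Sketch: rerun the gather-facts call from the sudo line above (as root).
    import json, subprocess

    facts = json.loads(subprocess.check_output(
        ["/bin/python3",
         "/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/"
         "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
         "gather-facts"], text=True))
    print(facts["hostname"], facts["memory_total_kb"])   # assumed keys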
Jan 22 15:00:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:40 compute-2 ceph-mon[77081]: pgmap v2796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 22 15:00:40 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:40 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5028 sec, osd.2 has slow ops (SLOW_OPS)
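The blocked-for counter in these health updates advances with the wall clock (5008 s at 15:00:20, 5013 s at 15:00:25, 5018 s at 15:00:30, 5028 s here), so it is the same stuck op aging, not new arrivals. Working backwards: 15:00:40 minus 5028 s (1 h 23 m 48 s) puts the onset at about 13:36:52, and 15:00:20 minus 5008 s gives the same instant, consistent across updates.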
Jan 22 15:00:41 compute-2 sudo[267393]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:41.518+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:41 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:41 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:41 compute-2 ceph-mon[77081]: pgmap v2797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 767 B/s wr, 25 op/s
Jan 22 15:00:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:00:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:00:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:00:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:00:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:00:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:00:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:42.519+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:42 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:42.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:42 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:43.527+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:43 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
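At 15:00:43 the count steps from 54 to 92 slow requests (the vms pool from 35 to 63 of them), the first growth in this window. A quick way to chart that from a saved copy of this journal; the filename is a placeholder, and the set-based dedup folds the duplicate per-report lines emitted by both the container unit and the plain ceph-osd unit:

    # Sketch: extract (time, slow-op count) pairs from this log to see the jump.
    import re

    pat = re.compile(r"(\d{2}:\d{2}:\d{2}).*get_health_metrics reporting (\d+) slow ops")
    seen = set()
    for line in open("compute-2-journal.log"):
        m = pat.search(line)
        if m and m.groups() not in seen:
            seen.add(m.groups())
            print(m.group(1), m.group(2))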
Jan 22 15:00:43 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:00:43.909 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:00:43 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 15:00:43 compute-2 ceph-mon[77081]: pgmap v2798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 15:00:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:44.543+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:44 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:44.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:44 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:45.555+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:45 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:45 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:45 compute-2 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:45 compute-2 ceph-mon[77081]: pgmap v2799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Jan 22 15:00:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:46.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:46.543+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:46 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:46.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:47 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:00:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:00:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:00:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:00:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:00:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:00:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:47.567+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:47 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:47 compute-2 podman[267452]: 2026-01-22 15:00:47.995972212 +0000 UTC m=+0.054783228 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 15:00:48 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:48 compute-2 ceph-mon[77081]: pgmap v2800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:00:48 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:00:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:00:48 compute-2 sudo[267471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:48 compute-2 sudo[267471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:48 compute-2 sudo[267471]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:48 compute-2 sudo[267496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:00:48 compute-2 sudo[267496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:48 compute-2 sudo[267496]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:48.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:48.574+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:48 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:48.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:49 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:49.587+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:49 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:50 compute-2 ceph-mon[77081]: pgmap v2801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 3.7 KiB/s rd, 255 B/s wr, 5 op/s
Jan 22 15:00:50 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:50 compute-2 ceph-mon[77081]: Health check update: 92 slow ops, oldest one blocked for 5038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:50.622+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:50 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:50.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:51 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:51.660+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:51 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:52 compute-2 ceph-mon[77081]: pgmap v2802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 3 op/s
Jan 22 15:00:52 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:52.612+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:52 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:00:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:00:53 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:53.635+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:53 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:54 compute-2 ceph-mon[77081]: pgmap v2803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:54 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:54.681+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:54 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:55 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:55 compute-2 ceph-mon[77081]: Health check update: 92 slow ops, oldest one blocked for 5043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:00:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:55.718+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:55 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:56 compute-2 ceph-mon[77081]: pgmap v2804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:56 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:56.678+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:56 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:56 compute-2 sudo[267526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:56 compute-2 sudo[267526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:56 compute-2 sudo[267526]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:57 compute-2 sudo[267551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:00:57 compute-2 sudo[267551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:00:57 compute-2 sudo[267551]: pam_unix(sudo:session): session closed for user root
Jan 22 15:00:57 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:57.683+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:57 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:58 compute-2 ceph-mon[77081]: pgmap v2805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:00:58 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:58.650+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:58 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:00:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:00:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:00:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:00:59 compute-2 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 15:00:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:59.620+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:59 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:00:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:00.585+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:00 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:00 compute-2 ceph-mon[77081]: pgmap v2806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:00 compute-2 ceph-mon[77081]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:00 compute-2 ceph-mon[77081]: Health check update: 92 slow ops, oldest one blocked for 5048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:00.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:01.555+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:01 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:01 compute-2 ceph-mon[77081]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:01 compute-2 CROND[267579]: (root) CMD (run-parts /etc/cron.hourly)
Jan 22 15:01:01 compute-2 run-parts[267582]: (/etc/cron.hourly) starting 0anacron
Jan 22 15:01:01 compute-2 run-parts[267588]: (/etc/cron.hourly) finished 0anacron
Jan 22 15:01:01 compute-2 CROND[267578]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 22 15:01:02 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:02.520+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:02 compute-2 ceph-mon[77081]: pgmap v2807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:02 compute-2 ceph-mon[77081]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:02.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:03 compute-2 podman[267590]: 2026-01-22 15:01:03.080198414 +0000 UTC m=+0.133898822 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 15:01:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:03.476+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:03 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:03 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:04.454+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:04 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:04 compute-2 ceph-mon[77081]: pgmap v2808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:04 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:05.503+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:05 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:05 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:05 compute-2 ceph-mon[77081]: Health check update: 88 slow ops, oldest one blocked for 5053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:06.495+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:06 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:06 compute-2 ceph-mon[77081]: pgmap v2809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:06 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:06.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:07.500+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:07 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:07 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:08.489+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:08 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:08 compute-2 ceph-mon[77081]: pgmap v2810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:08 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:09.493+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:09 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:09 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:09 compute-2 ceph-mon[77081]: pgmap v2811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:10.457+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:10 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:10.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:10 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:10 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5057 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:11.440+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:11 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:11 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:11 compute-2 ceph-mon[77081]: pgmap v2812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:12.440+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:12 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:13 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:13.423+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:13 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:14 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:14 compute-2 ceph-mon[77081]: pgmap v2813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:14.383+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:14 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:14.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:15 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:15.421+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:15 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:16 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:16 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:16 compute-2 ceph-mon[77081]: pgmap v2814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:16 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:16.447+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:16 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:17 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:17 compute-2 sudo[267624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:17 compute-2 sudo[267624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:17 compute-2 sudo[267624]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:17 compute-2 sudo[267649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:17 compute-2 sudo[267649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:17 compute-2 sudo[267649]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:17.470+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:17 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:18 compute-2 ceph-mon[77081]: pgmap v2815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:18 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:18.474+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:18 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:19 compute-2 podman[267675]: 2026-01-22 15:01:19.035356873 +0000 UTC m=+0.081382352 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:01:19 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:01:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3039925830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:01:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:19.425+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:19 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:20 compute-2 ceph-mon[77081]: pgmap v2816: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 22 15:01:20 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:20 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5067 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:20.410+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:20 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:20.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:21 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:21.370+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:21 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:22 compute-2 ceph-mon[77081]: pgmap v2817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:01:22 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:22.366+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:22 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:23 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:23.377+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:23 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:24 compute-2 ceph-mon[77081]: pgmap v2818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:01:24 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:24.399+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:24 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:24.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:25 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:25 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5072 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:25.362+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:25 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:26 compute-2 ceph-mon[77081]: pgmap v2819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:01:26 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:26.394+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:26 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:26.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:27 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:27.382+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:27 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:28 compute-2 ceph-mon[77081]: pgmap v2820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 22 KiB/s wr, 7 op/s
Jan 22 15:01:28 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:28.361+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:28 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:28.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:29 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:29.408+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:29 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:30.431+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:30 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:30.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:30 compute-2 ceph-mon[77081]: pgmap v2821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 22 KiB/s wr, 9 op/s
Jan 22 15:01:30 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:30 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5077 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:31.433+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:31 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:31 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:32.408+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:32 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:32 compute-2 ceph-mon[77081]: pgmap v2822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 18 op/s
Jan 22 15:01:32 compute-2 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 15:01:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:33.440+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:33 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:33 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:34 compute-2 podman[267702]: 2026-01-22 15:01:34.070214876 +0000 UTC m=+0.129420628 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 15:01:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:34.487+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:34 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:34.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:35 compute-2 ceph-mon[77081]: pgmap v2823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:01:35 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:35.450+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:35 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:36 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:36 compute-2 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5082 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:36 compute-2 ceph-mon[77081]: pgmap v2824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:01:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:36.406+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:36 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:37 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:37 compute-2 sudo[267730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:37 compute-2 sudo[267730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:37 compute-2 sudo[267730]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:37 compute-2 sudo[267755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:37 compute-2 sudo[267755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:37 compute-2 sudo[267755]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:37.446+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:37 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:38 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:38 compute-2 ceph-mon[77081]: pgmap v2825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:01:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:38.479+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:38 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:38.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:39.473+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:39 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:39 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:39 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:40.476+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:40 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:40.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:40.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:41 compute-2 ceph-mon[77081]: pgmap v2826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 597 B/s wr, 12 op/s
Jan 22 15:01:41 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:41 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:41.469+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:41 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:42 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:42 compute-2 ceph-mon[77081]: pgmap v2827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 341 B/s wr, 10 op/s
Jan 22 15:01:42 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:42.507+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:42 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:42.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:43 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:43.485+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:43 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:44 compute-2 ceph-mon[77081]: pgmap v2828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:44 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:44.513+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:44 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:44.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:45 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:45 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5092 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:45 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:45.535+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:46 compute-2 ceph-mon[77081]: pgmap v2829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:46 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:46 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:46.557+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:46.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:47 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:01:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:01:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:01:47.239 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:01:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:01:47.239 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:01:47 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:47.600+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:48 compute-2 ceph-mon[77081]: pgmap v2830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:48 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:48 compute-2 sudo[267785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:48 compute-2 sudo[267785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-2 sudo[267785]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:48 compute-2 sudo[267810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:01:48 compute-2 sudo[267810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-2 sudo[267810]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:48 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:48.623+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:48 compute-2 sudo[267835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:48 compute-2 sudo[267835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-2 sudo[267835]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:48 compute-2 sudo[267860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:01:48 compute-2 sudo[267860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:48.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:49 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:49 compute-2 sudo[267860]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:49.618+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:49 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:49 compute-2 podman[267917]: 2026-01-22 15:01:49.98672848 +0000 UTC m=+0.052066459 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:01:50 compute-2 ceph-mon[77081]: pgmap v2831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:01:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:01:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:01:50 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:01:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:01:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:01:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:01:50 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:50 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:01:50.436 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:01:50 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:01:50.437 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:01:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:50.593+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:50 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:50.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:51 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:51.589+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:51 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:52 compute-2 ceph-mon[77081]: pgmap v2832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:01:52 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:52 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:01:52.438 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:01:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:52.575+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:52 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:52.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:52.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:53 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:53.623+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:53 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:54 compute-2 ceph-mon[77081]: pgmap v2833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:01:54 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:54.651+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:54 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:01:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:01:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:54.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:55 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:55 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:01:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:55.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:55 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:55 compute-2 sudo[267940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:55 compute-2 sudo[267940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:55 compute-2 sudo[267940]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:56 compute-2 sudo[267965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:01:56 compute-2 sudo[267965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:56 compute-2 sudo[267965]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:56 compute-2 ceph-mon[77081]: pgmap v2834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 426 B/s wr, 1 op/s
Jan 22 15:01:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:01:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:01:56 compute-2 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 15:01:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:56.611+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:56 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:56.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:57 compute-2 sudo[267991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:57 compute-2 sudo[267991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:57 compute-2 sudo[267991]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:57 compute-2 sudo[268016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:01:57 compute-2 sudo[268016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:01:57 compute-2 sudo[268016]: pam_unix(sudo:session): session closed for user root
Jan 22 15:01:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:57.578+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:57 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:57 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:58.589+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:58 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:58 compute-2 ceph-mon[77081]: pgmap v2835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 15:01:58 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
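The pgmap lines give the cluster state behind those warnings: 2 of 305 placement groups are active+clean+laggy (data fully replicated, but a replica is slow to acknowledge), the rest are healthy, and client throughput is down to a few hundred bytes per second on a mostly empty 21 GiB cluster. The per-state counts can be pulled out of the summary like so (regex based on the line format above):

    import re

    PGMAP = re.compile(r"pgmap v\d+: \d+ pgs: (?P<states>[^;]+);")

    line = ("pgmap v2835: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail")
    states = {
        name: int(count)
        for count, name in (s.split(" ", 1) for s in
                            PGMAP.search(line)["states"].split(", "))
    }
    # -> {'active+clean+laggy': 2, 'active+clean': 303}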
Jan 22 15:01:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:01:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:58.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:01:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:01:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:01:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:01:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:59.605+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:59 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:01:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:01:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:01:59 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:00.588+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:00 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:00 compute-2 ceph-mon[77081]: pgmap v2836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 15:02:00 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:00 compute-2 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5108 sec, osd.2 has slow ops (SLOW_OPS)
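The SLOW_OPS health updates age in lockstep with the mon's 5-second health tick: 5108 s here, then 5113/5118/5123/5128/5133 s in the updates below (the count of 20 in this update lags the OSD's 86 and catches up at 15:02:06). Working backwards, the oldest op arrived around 13:36:52, well before this excerpt starts; nothing new is failing here, one old request is simply still blocked. The arithmetic:

    from datetime import datetime, timedelta

    update  = datetime(2026, 1, 22, 15, 2, 0)   # "Health check update" timestamp
    blocked = timedelta(seconds=5108)           # "oldest one blocked for 5108 sec"
    print(update - blocked)                     # 2026-01-22 13:36:52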
Jan 22 15:02:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:00.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:00.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:01.541+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:01 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:01 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.908775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121908862, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2454, "num_deletes": 251, "total_data_size": 4783821, "memory_usage": 4875248, "flush_reason": "Manual Compaction"}
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121947764, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 3108278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83725, "largest_seqno": 86174, "table_properties": {"data_size": 3099212, "index_size": 5239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23322, "raw_average_key_size": 21, "raw_value_size": 3079129, "raw_average_value_size": 2822, "num_data_blocks": 225, "num_entries": 1091, "num_filter_entries": 1091, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093958, "oldest_key_time": 1769093958, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 39043 microseconds, and 7998 cpu microseconds.
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.947831) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 3108278 bytes OK
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.947851) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.958601) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.958624) EVENT_LOG_v1 {"time_micros": 1769094121958618, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.958644) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4772716, prev total WAL file size 4772716, number of live WAL files 2.
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.959986) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(3035KB)], [174(8966KB)]
Jan 22 15:02:01 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121960061, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 12289802, "oldest_snapshot_seqno": -1}
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 13401 keys, 10597195 bytes, temperature: kUnknown
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094122053099, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 10597195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10525771, "index_size": 36815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33541, "raw_key_size": 368994, "raw_average_key_size": 27, "raw_value_size": 10299091, "raw_average_value_size": 768, "num_data_blocks": 1325, "num_entries": 13401, "num_filter_entries": 13401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.053414) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10597195 bytes
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.059191) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.0 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 13918, records dropped: 517 output_compression: NoCompression
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.059210) EVENT_LOG_v1 {"time_micros": 1769094122059201, "job": 112, "event": "compaction_finished", "compaction_time_micros": 93102, "compaction_time_cpu_micros": 33675, "output_level": 6, "num_output_files": 1, "total_output_size": 10597195, "num_input_records": 13918, "num_output_records": 13401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094122059798, "job": 112, "event": "table_file_deletion", "file_number": 176}
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094122061531, "job": 112, "event": "table_file_deletion", "file_number": 174}
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.959871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:02:02 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
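Jobs 111 and 112 above are the monitor compacting its RocksDB store (/var/lib/ceph/mon/ceph-compute-2/store.db): a ~4.8 MB memtable is flushed to the L0 table #176 (3,108,278 bytes), that table is then merge-compacted with the existing L6 table #174 into #177 (10,597,195 bytes), and the superseded WAL and SST files are deleted. The amplification figures in the "compacted to" summary can be reproduced from the EVENT_LOG values (numbers copied from the entries above):

    l0_in    = 3_108_278     # table #176: flush output, compaction input
    total_in = 12_289_802    # job 112 "input_data_size" (L0 + L6 inputs)
    out      = 10_597_195    # table #177 "total_output_size"

    write_amplify      = out / l0_in               # ~3.4, as logged
    read_write_amplify = (total_in + out) / l0_in  # ~7.4, as logged
    print(round(write_amplify, 1), round(read_write_amplify, 1))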
Jan 22 15:02:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:02.562+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:02 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:02.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:02 compute-2 ceph-mon[77081]: pgmap v2837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 511 B/s wr, 1 op/s
Jan 22 15:02:02 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:03.585+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:03 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:04 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:04.566+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:04 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:04.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:04.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:05 compute-2 podman[268045]: 2026-01-22 15:02:05.035796893 +0000 UTC m=+0.095672723 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
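The podman line is a container health event: podman executed the configured check (/openstack/healthcheck, bind-mounted into the ovn_controller container) and recorded health_status=healthy with a failing streak of zero. The embedded config_data blob is a Python-style dict literal, so it can be recovered for inspection with ast.literal_eval; the slicing below assumes the blob is followed by ", config_id=" as in this particular line:

    import ast

    def extract_config_data(event_line: str) -> dict:
        """Pull the config_data dict out of a podman health event line.

        Convenience for eyeballing journal lines like the one above; the
        ', config_id=' terminator is an assumption, not a podman contract.
        """
        start = event_line.index("config_data=") + len("config_data=")
        end = event_line.index(", config_id=", start)
        return ast.literal_eval(event_line[start:end])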
Jan 22 15:02:05 compute-2 ceph-mon[77081]: pgmap v2838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 426 B/s wr, 1 op/s
Jan 22 15:02:05 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:05.560+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:05 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:06 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:06 compute-2 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:06 compute-2 ceph-mon[77081]: pgmap v2839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 426 B/s wr, 1 op/s
Jan 22 15:02:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:06.558+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:06 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:06.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:07 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:07 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:07.588+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:07 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:08 compute-2 ceph-mon[77081]: pgmap v2840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:02:08 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:08.598+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:08 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:08.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:08.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:09.581+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:09 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:09 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:10.614+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:10 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:10.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:11 compute-2 ceph-mon[77081]: pgmap v2841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:11 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:11 compute-2 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:11.592+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:11 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:12 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:12 compute-2 ceph-mon[77081]: pgmap v2842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:12.615+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:12 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:12.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:02:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:02:13 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:13 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:13.654+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:13 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:14.606+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:14 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:14 compute-2 ceph-mon[77081]: pgmap v2843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:14 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:15 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:15.599+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:16 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:16 compute-2 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:16 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:16.624+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:17 compute-2 ceph-mon[77081]: pgmap v2844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:17 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:17 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:17.634+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:17 compute-2 sudo[268077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:17 compute-2 sudo[268077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:17 compute-2 sudo[268077]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:17 compute-2 sudo[268102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:17 compute-2 sudo[268102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:17 compute-2 sudo[268102]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:18 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:18 compute-2 ceph-mon[77081]: pgmap v2845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:18.610+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:18 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:18.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:19.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:19 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/68083093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:02:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/68083093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
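The client.openstack commands dispatched to the mon ("df" and "osd pool get-quota" on the volumes pool, both requesting JSON) match the periodic capacity polling an OpenStack Cinder RBD backend performs against its pool; that attribution is inferred from the entity name, not stated in the log. The same two queries can be reproduced by hand for comparison:

    import json
    import subprocess

    def pool_usage(pool: str = "volumes") -> tuple[dict, dict]:
        """Run the same queries the mon logged: cluster df and a pool quota."""
        def ceph_json(*args: str) -> dict:
            out = subprocess.run(["ceph", *args, "--format", "json"],
                                 check=True, capture_output=True, text=True).stdout
            return json.loads(out)

        return ceph_json("df"), ceph_json("osd", "pool", "get-quota", pool)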
Jan 22 15:02:19 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:19 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:19.614+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:20 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:20.603+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:20.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:20 compute-2 ceph-mon[77081]: pgmap v2846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:20 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:20 compute-2 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:20 compute-2 podman[268129]: 2026-01-22 15:02:20.985180239 +0000 UTC m=+0.050769397 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 15:02:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:21.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:21 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:21.579+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:22 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:22 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:22.596+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
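Between 15:02:21 and 15:02:22 the stuck-op count steps from 86 to 98 (and the vms pool's share from 59 to 67) while the oldest op is unchanged: new client requests are queueing behind the same blockage rather than old ones clearing. The mon's summaries at 15:02:23 show both values because its cluster-log relay lags the OSD by a beat. The deltas:

    prev, curr = (86, 59), (98, 67)   # (total slow ops, ops against pool 'vms')
    print(curr[0] - prev[0],          # 12 newly delayed requests in total,
          curr[1] - prev[1])          # 8 of them against 'vms'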
Jan 22 15:02:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:22.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:23 compute-2 ceph-mon[77081]: pgmap v2847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:23 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:23 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:23 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:23.638+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:24 compute-2 ceph-mon[77081]: pgmap v2848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:24 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:24 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:24.645+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:24.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:25.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:25 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:25 compute-2 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:25 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:25.641+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:26 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:26.666+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:26 compute-2 ceph-mon[77081]: pgmap v2849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:26 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:26.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:27 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:27.654+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:28 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:28.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:28.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:28 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:29.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:29 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:29.701+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:29 compute-2 ceph-mon[77081]: pgmap v2850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:29 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:29 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:30 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:30.740+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:30.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:30 compute-2 ceph-mon[77081]: pgmap v2851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:30 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:30 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:31.704+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 28 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:31 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 28 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 28 slow requests (by type [ 'delayed' : 28 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:02:31 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:32.684+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:32 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:32.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:33.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:33 compute-2 ceph-mon[77081]: pgmap v2852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:33 compute-2 ceph-mon[77081]: 28 slow requests (by type [ 'delayed' : 28 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:02:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:33.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:33 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:34 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:34 compute-2 ceph-mon[77081]: pgmap v2853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:02:34 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:34.715+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:34 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:35.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:35 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:35 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:35.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:35 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:36 compute-2 podman[268155]: 2026-01-22 15:02:36.012387481 +0000 UTC m=+0.078609825 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 15:02:36 compute-2 ceph-mon[77081]: pgmap v2854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 15:02:36 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:36.681+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:36 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:37.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:37 compute-2 sshd-session[268182]: Invalid user admin from 45.148.10.240 port 52890
Jan 22 15:02:37 compute-2 sshd-session[268182]: Connection closed by invalid user admin 45.148.10.240 port 52890 [preauth]
Jan 22 15:02:37 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:37.658+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:37 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:37 compute-2 sudo[268185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:37 compute-2 sudo[268185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:37 compute-2 sudo[268185]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:37 compute-2 sudo[268210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:37 compute-2 sudo[268210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:37 compute-2 sudo[268210]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:38 compute-2 ceph-mon[77081]: pgmap v2855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 22 15:02:38 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:38.636+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:38 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:38.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:39.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:39 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:39.613+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 87 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:39 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 87 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 87 slow requests (by type [ 'delayed' : 87 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:40.648+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 34 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:40 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 34 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 34 slow requests (by type [ 'delayed' : 34 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:02:40 compute-2 ceph-mon[77081]: pgmap v2856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 22 15:02:40 compute-2 ceph-mon[77081]: 87 slow requests (by type [ 'delayed' : 87 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:02:40 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:41.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:41.645+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 89 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:41 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 89 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 89 slow requests (by type [ 'delayed' : 89 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:02:42 compute-2 ceph-mon[77081]: 34 slow requests (by type [ 'delayed' : 34 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:02:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:42.693+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:42 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:43.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:43 compute-2 ceph-mon[77081]: pgmap v2857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 596 B/s wr, 22 op/s
Jan 22 15:02:43 compute-2 ceph-mon[77081]: 89 slow requests (by type [ 'delayed' : 89 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:02:43 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:43.658+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:43 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:44.625+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:44 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:44 compute-2 ceph-mon[77081]: pgmap v2858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 722 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 596 B/s wr, 22 op/s
Jan 22 15:02:44 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:45.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:45.638+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:45 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:45 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:45 compute-2 ceph-mon[77081]: Health check update: 87 slow ops, oldest one blocked for 5153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:46.662+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:46 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 15:02:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 15:02:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:47.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:47 compute-2 ceph-mon[77081]: pgmap v2859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 740 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 874 KiB/s wr, 26 op/s
Jan 22 15:02:47 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4231267640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 15:02:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:02:47.239 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:02:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:02:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:02:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:02:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:02:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:47.675+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:47 compute-2 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e170 e170: 3 total, 3 up, 3 in
Jan 22 15:02:48 compute-2 ceph-osd[79779]: osd.2 170 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:48.674+0000 7f47f8ed4640 -1 osd.2 170 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e171 e171: 3 total, 3 up, 3 in
Jan 22 15:02:49 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:49 compute-2 ceph-mon[77081]: pgmap v2860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 15:02:49 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:49 compute-2 ceph-mon[77081]: osdmap e170: 3 total, 3 up, 3 in
Jan 22 15:02:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:49.685+0000 7f47f8ed4640 -1 osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:49 compute-2 ceph-osd[79779]: osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:50.637+0000 7f47f8ed4640 -1 osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:50 compute-2 ceph-osd[79779]: osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:51.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:51 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:51 compute-2 ceph-mon[77081]: osdmap e171: 3 total, 3 up, 3 in
Jan 22 15:02:51 compute-2 ceph-mon[77081]: pgmap v2863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 2.7 MiB/s wr, 28 op/s
Jan 22 15:02:51 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:51 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e172 e172: 3 total, 3 up, 3 in
Jan 22 15:02:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:51.633+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:51 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 41 ])
Jan 22 15:02:51 compute-2 podman[268242]: 2026-01-22 15:02:51.992823839 +0000 UTC m=+0.051090775 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 15:02:52 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:02:52 compute-2 ceph-mon[77081]: pgmap v2864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.7 MiB/s wr, 33 op/s
Jan 22 15:02:52 compute-2 ceph-mon[77081]: osdmap e172: 3 total, 3 up, 3 in
Jan 22 15:02:52 compute-2 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 41 ])
Jan 22 15:02:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:52.666+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:52 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:02:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:53.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:02:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:53.701+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:53 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:54 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:54.723+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:54 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:54 compute-2 ceph-mon[77081]: pgmap v2866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 768 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 852 B/s wr, 7 op/s
Jan 22 15:02:54 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:55.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:02:55.270 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:02:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:02:55.270 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:02:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:02:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:55.756+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:55 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:56 compute-2 sudo[268262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:56 compute-2 sudo[268262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-2 sudo[268262]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:56 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:02:56 compute-2 sudo[268287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:02:56 compute-2 sudo[268287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-2 sudo[268287]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-2 sudo[268312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:56 compute-2 sudo[268312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-2 sudo[268312]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-2 sudo[268337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:02:56 compute-2 sudo[268337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:56.719+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:56 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:56 compute-2 sudo[268337]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:57.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:57.761+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:57 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:57 compute-2 sudo[268394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:57 compute-2 sudo[268394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:57 compute-2 sudo[268394]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:58 compute-2 sudo[268419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:02:58 compute-2 sudo[268419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:02:58 compute-2 sudo[268419]: pam_unix(sudo:session): session closed for user root
Jan 22 15:02:58 compute-2 ceph-mon[77081]: pgmap v2867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 782 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 678 KiB/s wr, 10 op/s
Jan 22 15:02:58 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:02:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:02:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:02:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:58.811+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:58 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:58 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:58 compute-2 ceph-mon[77081]: pgmap v2868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 15:02:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:02:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:02:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:02:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:02:58 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:02:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:02:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:59.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:02:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:59.804+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:59 compute-2 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:02:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:02:59 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 e173: 3 total, 3 up, 3 in
Jan 22 15:03:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:00.834+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:00 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:00 compute-2 ceph-mon[77081]: pgmap v2869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 26 op/s
Jan 22 15:03:00 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:00 compute-2 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:00 compute-2 ceph-mon[77081]: osdmap e173: 3 total, 3 up, 3 in
Jan 22 15:03:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:00.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:01.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:01 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:03:01.271 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:03:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:01.810+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:01 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:01 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:02.829+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:02 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:02 compute-2 ceph-mon[77081]: pgmap v2871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 22 op/s
Jan 22 15:03:02 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:03:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:03.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:03:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:03.789+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:03 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:03 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:03:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:03:04 compute-2 sudo[268447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:04 compute-2 sudo[268447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:04 compute-2 sudo[268447]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:04 compute-2 sudo[268472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:03:04 compute-2 sudo[268472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:04 compute-2 sudo[268472]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:04.741+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:04 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:04.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:04 compute-2 ceph-mon[77081]: pgmap v2872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 21 op/s
Jan 22 15:03:04 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:05.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:05.765+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:05 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:06 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:06 compute-2 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:06.725+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:06 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:07.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:07 compute-2 podman[268499]: 2026-01-22 15:03:07.075327851 +0000 UTC m=+0.124508871 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 15:03:07 compute-2 ceph-mon[77081]: pgmap v2873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 22 15:03:07 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:07 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:07.687+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:07 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:08 compute-2 ceph-mon[77081]: pgmap v2874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:08 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:08 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:08.733+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:09.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:09 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:09.729+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:09 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:10 compute-2 ceph-mon[77081]: pgmap v2875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:10 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:10 compute-2 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:10.726+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:10 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:11 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:11.763+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:11 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:12 compute-2 ceph-mon[77081]: pgmap v2876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 185 B/s rd, 92 B/s wr, 0 op/s
Jan 22 15:03:12 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:12.765+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:12 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:13 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:13.763+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:13 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:14.729+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:14 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:14 compute-2 ceph-mon[77081]: pgmap v2877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 22 15:03:14 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:15.680+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:15 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:15 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:15 compute-2 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:16.702+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:16 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:16 compute-2 ceph-mon[77081]: pgmap v2878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:16 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:16.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:17.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:17.670+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:17 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:17 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:18 compute-2 sudo[268531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:18 compute-2 sudo[268531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:18 compute-2 sudo[268531]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:18 compute-2 sudo[268556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:18 compute-2 sudo[268556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:18 compute-2 sudo[268556]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:03:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:03:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:03:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:03:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:18.659+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:18 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:18 compute-2 ceph-mon[77081]: pgmap v2879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:18 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:03:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:03:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:03:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:18.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:03:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:19.682+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:19 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:20 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:20.646+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:20 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 15:03:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:20.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 15:03:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:21 compute-2 ceph-mon[77081]: pgmap v2880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:21 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:21 compute-2 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:21 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:21.666+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:21 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:22 compute-2 ceph-mon[77081]: pgmap v2881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 22 15:03:22 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:03:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:22.687+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:22 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:22 compute-2 podman[268584]: 2026-01-22 15:03:22.98764064 +0000 UTC m=+0.042200875 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:03:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:03:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:03:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:23.677+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:23 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:24 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:24 compute-2 ceph-mon[77081]: pgmap v2882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 15:03:24 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:24.647+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:24 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:25.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:25 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:25 compute-2 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:25.671+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:25 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:26 compute-2 ceph-mon[77081]: pgmap v2883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 22 15:03:26 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:26.677+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:26 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:03:27 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:03:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:03:27 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:03:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:27 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:03:27 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:03:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:27.663+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:27 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:28 compute-2 ceph-mon[77081]: pgmap v2884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:28 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:28.644+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:28 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:28.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:29 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:29.654+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:29 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.088735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210088782, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1382, "num_deletes": 257, "total_data_size": 2543602, "memory_usage": 2590024, "flush_reason": "Manual Compaction"}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210106611, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 1671345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86179, "largest_seqno": 87556, "table_properties": {"data_size": 1665616, "index_size": 2868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14497, "raw_average_key_size": 20, "raw_value_size": 1653115, "raw_average_value_size": 2361, "num_data_blocks": 124, "num_entries": 700, "num_filter_entries": 700, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094122, "oldest_key_time": 1769094122, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 17948 microseconds, and 7796 cpu microseconds.
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106683) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 1671345 bytes OK
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106712) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108511) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108536) EVENT_LOG_v1 {"time_micros": 1769094210108529, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108559) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2536863, prev total WAL file size 2536863, number of live WAL files 2.
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.109872) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303232' seq:72057594037927935, type:22 .. '6C6F676D0034323735' seq:0, type:0; will stop at (end)
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1632KB)], [177(10MB)]
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210109967, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 12268540, "oldest_snapshot_seqno": -1}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 13570 keys, 12122036 bytes, temperature: kUnknown
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210223873, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 12122036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12047922, "index_size": 39057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33989, "raw_key_size": 374170, "raw_average_key_size": 27, "raw_value_size": 11816645, "raw_average_value_size": 870, "num_data_blocks": 1415, "num_entries": 13570, "num_filter_entries": 13570, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.224300) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 12122036 bytes
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.226217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.6 rd, 106.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(14.6) write-amplify(7.3) OK, records in: 14101, records dropped: 531 output_compression: NoCompression
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.226251) EVENT_LOG_v1 {"time_micros": 1769094210226235, "job": 114, "event": "compaction_finished", "compaction_time_micros": 113998, "compaction_time_cpu_micros": 58778, "output_level": 6, "num_output_files": 1, "total_output_size": 12122036, "num_input_records": 14101, "num_output_records": 13570, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210226981, "job": 114, "event": "table_file_deletion", "file_number": 179}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210230588, "job": 114, "event": "table_file_deletion", "file_number": 177}
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.109730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:03:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:30 compute-2 ceph-mon[77081]: pgmap v2885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 11 op/s
Jan 22 15:03:30 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:30 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:30.654+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:30 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:31.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:31 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:31.631+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:31 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:32 compute-2 ceph-mon[77081]: pgmap v2886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:32 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:32.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:32 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:32.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:33.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:33 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:03:33.239 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:03:33 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:03:33.241 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:03:33 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:33.690+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:33 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:34 compute-2 ceph-mon[77081]: pgmap v2887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:34 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:34.691+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:34 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:34.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:35 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:35 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:35.717+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:35 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:36.730+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:36 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:36 compute-2 ceph-mon[77081]: pgmap v2888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:36 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:03:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:03:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:37.741+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:37 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:37 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:38 compute-2 podman[268611]: 2026-01-22 15:03:38.063779648 +0000 UTC m=+0.121479416 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:03:38 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:03:38.243 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:03:38 compute-2 sudo[268638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:38 compute-2 sudo[268638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:38 compute-2 sudo[268638]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:38 compute-2 sudo[268663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:38 compute-2 sudo[268663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:38 compute-2 sudo[268663]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:38.709+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:38 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:38 compute-2 ceph-mon[77081]: pgmap v2889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:38 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:39.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:39 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:39 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:40.691+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:40 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:40 compute-2 ceph-mon[77081]: pgmap v2890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:03:40 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:40 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:40.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:41.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:41.699+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:41 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:41 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:42.655+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:42 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:42 compute-2 ceph-mon[77081]: pgmap v2891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 22 15:03:42 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:42.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:43.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:43.631+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:43 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:43 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:44.679+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:44 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:44 compute-2 ceph-mon[77081]: pgmap v2892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:44 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:44.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:45.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:45.717+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:45 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:45 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:45 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:46.689+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:46 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:46 compute-2 ceph-mon[77081]: pgmap v2893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:46 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:03:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:03:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:03:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:03:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:03:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:03:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:47.732+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:47 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:47 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:48.728+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:48 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:48 compute-2 ceph-mon[77081]: pgmap v2894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:48 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:49.774+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:49 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:50 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:50.769+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:50 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:51 compute-2 ceph-mon[77081]: pgmap v2895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:51 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:51 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:51 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:51.764+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:51 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:52 compute-2 ceph-mon[77081]: pgmap v2896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:52 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:52.747+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:52 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:53 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:53.782+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:53 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:54 compute-2 podman[268696]: 2026-01-22 15:03:54.04841778 +0000 UTC m=+0.097320818 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 15:03:54 compute-2 ceph-mon[77081]: pgmap v2897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:54 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:54.770+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:54 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:55 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:55 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:03:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:03:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:55.721+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:55 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:56 compute-2 ceph-mon[77081]: pgmap v2898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:56 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:56.752+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:56 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:57 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:57.707+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:57 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:58 compute-2 ceph-mon[77081]: pgmap v2899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:03:58 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:58 compute-2 sudo[268718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:58 compute-2 sudo[268718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:58 compute-2 sudo[268718]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:58 compute-2 sudo[268743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:03:58 compute-2 sudo[268743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:03:58 compute-2 sudo[268743]: pam_unix(sudo:session): session closed for user root
Jan 22 15:03:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:58.664+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:58 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:03:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:03:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:03:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:03:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:03:59 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:03:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:59.714+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:59 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:03:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:00 compute-2 ceph-mon[77081]: pgmap v2900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:00 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:00 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:00.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:00 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:01.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:04:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:04:01 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:01.658+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:01 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:02 compute-2 ceph-mon[77081]: pgmap v2901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:02 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:02.662+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:02 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:03.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:03 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:03.700+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:03 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:04 compute-2 ceph-mon[77081]: pgmap v2902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:04 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:04 compute-2 sudo[268771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:04 compute-2 sudo[268771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-2 sudo[268771]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:04 compute-2 sudo[268796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:04:04 compute-2 sudo[268796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-2 sudo[268796]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:04 compute-2 sudo[268821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:04 compute-2 sudo[268821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-2 sudo[268821]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:04 compute-2 sudo[268846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:04:04 compute-2 sudo[268846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:04.731+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:04 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:05.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:05 compute-2 sudo[268846]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:05 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:05 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:05.769+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:05 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:06 compute-2 ceph-mon[77081]: pgmap v2903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:04:06 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:04:06 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:06.740+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:06 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:07 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:07.734+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:07 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:08 compute-2 ceph-mon[77081]: pgmap v2904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:08 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:08.710+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:08 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:09 compute-2 podman[268906]: 2026-01-22 15:04:09.015185702 +0000 UTC m=+0.080589414 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 15:04:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:09 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:09.725+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:09 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:10 compute-2 ceph-mon[77081]: pgmap v2905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:10 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:10 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:10.735+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:10 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:11.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:11 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:11.754+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:11 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:12 compute-2 sudo[268933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:12 compute-2 sudo[268933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:12 compute-2 sudo[268933]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:12 compute-2 ceph-mon[77081]: pgmap v2906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:12 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:04:12 compute-2 sudo[268958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:04:12 compute-2 sudo[268958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:12 compute-2 sudo[268958]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:12.737+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:12 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:13.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:13.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:13 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:13.728+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:13 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:14 compute-2 ceph-mon[77081]: pgmap v2907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:14 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:14.715+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:14 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:15.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 15:04:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 15:04:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:15 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:15 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:15.676+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:15 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:16 compute-2 ceph-mon[77081]: pgmap v2908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:16 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:16.704+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:16 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 15:04:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:17.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 15:04:17 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:17.715+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:17 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:18 compute-2 ceph-mon[77081]: pgmap v2909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:18 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3231129722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:04:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3231129722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:04:18 compute-2 sudo[268986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:18 compute-2 sudo[268986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:18 compute-2 sudo[268986]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:18.733+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:18 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:18 compute-2 sudo[269011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:18 compute-2 sudo[269011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:18 compute-2 sudo[269011]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:19 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:19.714+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:19 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:20 compute-2 ceph-mon[77081]: pgmap v2910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 170 B/s wr, 1 op/s
Jan 22 15:04:20 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:20 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:20.762+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:20 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:21.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:21 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:21.800+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:21 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:22 compute-2 ceph-mon[77081]: pgmap v2911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 22 15:04:22 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:22.813+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:22 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 15:04:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 15:04:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 15:04:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 15:04:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:23.832+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:23 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:23 compute-2 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 15:04:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:24.794+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:24 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:24 compute-2 ceph-mon[77081]: pgmap v2912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 801 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 22 15:04:24 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:25 compute-2 podman[269040]: 2026-01-22 15:04:25.025965068 +0000 UTC m=+0.060936848 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:04:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:25.807+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:25 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:25 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:25 compute-2 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:26.774+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:26 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:26 compute-2 ceph-mon[77081]: pgmap v2913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 814 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 528 KiB/s wr, 33 op/s
Jan 22 15:04:26 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:27.750+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:27 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:27 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:28.700+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:28 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:28 compute-2 ceph-mon[77081]: pgmap v2914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 22 15:04:28 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:29.678+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:29 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:29 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:29 compute-2 ceph-mon[77081]: pgmap v2915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 22 15:04:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:30.717+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:30 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:31.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:31 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:31 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5258 sec, osd.2 has slow ops (SLOW_OPS)
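
The SLOW_OPS health update gives an absolute age for the oldest blocked op, so it can be dated: 15:04:31 minus 5258 s puts the start of the stall at about 13:36:53, meaning the omap-get-vals read of rbd_mirror_snapshot_schedule by client.14140.0:10 has been blocked for nearly an hour and a half. A quick check (Python; both numbers are taken from the health line above):

    from datetime import datetime, timedelta

    reported = datetime(2026, 1, 22, 15, 4, 31)   # timestamp of the health update
    blocked_for = timedelta(seconds=5258)         # "oldest one blocked for 5258 sec"
    print(reported - blocked_for)                 # 2026-01-22 13:36:53

The later updates (5263, 5268, ... at five-second intervals) confirm it is the same op aging, not new ones arriving.
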
Jan 22 15:04:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:31.687+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:31 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:32 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:32 compute-2 ceph-mon[77081]: pgmap v2916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 22 15:04:32 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:32.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:32 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:04:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:33.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:04:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:33 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:33.643+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:33 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:34 compute-2 ceph-mon[77081]: pgmap v2917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 15:04:34 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:34.677+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:34 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:35.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:35 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:35 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:35.703+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:35 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:36 compute-2 ceph-mon[77081]: pgmap v2918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 22 15:04:36 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:36.693+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:36 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:37.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:37 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:37.686+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:37 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:38 compute-2 ceph-mon[77081]: pgmap v2919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 5.2 KiB/s rd, 1.3 MiB/s wr, 8 op/s
Jan 22 15:04:38 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:38.713+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:38 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
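
With osd.2 re-reporting the same oldest op once a second, the usual next diagnostic step is to dump the OSD's in-flight ops over its admin socket and inspect each op's event timeline for the stage it is stuck in. A rough triage sketch (Python; this assumes the ceph CLI and the osd.2 admin socket are reachable from this host, and the exact JSON field names vary between Ceph releases):

    import json
    import subprocess

    # dump_ops_in_flight returns JSON describing every op the OSD is
    # currently holding; descriptions match the osd_op(...) text logged above.
    raw = subprocess.run(
        ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
        capture_output=True, text=True, check=True,
    ).stdout
    for op in json.loads(raw).get("ops", []):
        print(op.get("duration"), op.get("description"))

An op pinned at a "waiting for ..." event would distinguish a stuck PG or lock from a slow disk; the 'delayed' classification in the summaries above already points at ops sitting in queue rather than at raw I/O.
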
Jan 22 15:04:38 compute-2 sudo[269066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:38 compute-2 sudo[269066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:38 compute-2 sudo[269066]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:38 compute-2 sudo[269092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:38 compute-2 sudo[269092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:38 compute-2 sudo[269092]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:39.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:39 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:39.674+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:39 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:40 compute-2 podman[269117]: 2026-01-22 15:04:40.036996121 +0000 UTC m=+0.094485247 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
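
Podman health events like this one embed the full container definition as a Python-literal dict in the config_data label, which makes the healthcheck itself ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller) recoverable from the journal alone. A small extraction sketch (Python; the helper name is mine, and it assumes the dict is the last brace-terminated span on the line, as it is here):

    import ast
    import re

    def container_config(line):
        # The event embeds the container config as a Python-literal dict
        # in config_data={...}; grab it and evaluate it safely.
        m = re.search(r"config_data=(\{.*\})", line)
        return ast.literal_eval(m.group(1)) if m else {}

For the line above this yields the healthcheck mount and test command, the host-networking flag, and the volume list without touching the host's podman state.
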
Jan 22 15:04:40 compute-2 ceph-mon[77081]: pgmap v2920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:40 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:40 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:40.693+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:40 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:41.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:41.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:41 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:41.700+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:41 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:42 compute-2 ceph-mon[77081]: pgmap v2921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:42 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:42.653+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:42 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:43.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:43 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:43.610+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:43 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:44 compute-2 ceph-mon[77081]: pgmap v2922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:44 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:44.595+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:44 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:45.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:45 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:45 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:45.624+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:45 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:46 compute-2 ceph-mon[77081]: pgmap v2923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:46 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:46.640+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:46 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:47.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:47.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:04:47.241 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:04:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:04:47.241 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:04:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:04:47.241 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:04:47 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:47.603+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:47 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:48 compute-2 ceph-mon[77081]: pgmap v2924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:48 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:48.571+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:48 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:49.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:49 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.512571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289512653, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1345, "num_deletes": 251, "total_data_size": 2360734, "memory_usage": 2388928, "flush_reason": "Manual Compaction"}
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289522582, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1539104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87561, "largest_seqno": 88901, "table_properties": {"data_size": 1533722, "index_size": 2585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13910, "raw_average_key_size": 20, "raw_value_size": 1522007, "raw_average_value_size": 2275, "num_data_blocks": 111, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094210, "oldest_key_time": 1769094210, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 10068 microseconds, and 4805 cpu microseconds.
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.522651) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1539104 bytes OK
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.522670) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.524649) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.524666) EVENT_LOG_v1 {"time_micros": 1769094289524660, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.524686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2354236, prev total WAL file size 2354236, number of live WAL files 2.
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.525294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1503KB)], [180(11MB)]
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289525355, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 13661140, "oldest_snapshot_seqno": -1}
Jan 22 15:04:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:49.589+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:49 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 13722 keys, 11987527 bytes, temperature: kUnknown
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289615953, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11987527, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11912712, "index_size": 39374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34373, "raw_key_size": 378510, "raw_average_key_size": 27, "raw_value_size": 11679140, "raw_average_value_size": 851, "num_data_blocks": 1424, "num_entries": 13722, "num_filter_entries": 13722, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.616405) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11987527 bytes
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.617699) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.5 rd, 132.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(16.7) write-amplify(7.8) OK, records in: 14239, records dropped: 517 output_compression: NoCompression
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.617721) EVENT_LOG_v1 {"time_micros": 1769094289617710, "job": 116, "event": "compaction_finished", "compaction_time_micros": 90753, "compaction_time_cpu_micros": 39280, "output_level": 6, "num_output_files": 1, "total_output_size": 11987527, "num_input_records": 14239, "num_output_records": 13722, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289618205, "job": 116, "event": "table_file_deletion", "file_number": 182}
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289621081, "job": 116, "event": "table_file_deletion", "file_number": 180}
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.525230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:04:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
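
The EVENT_LOG_v1 payloads are plain JSON, so the amplification figures in the JOB 116 summary can be reproduced directly from them: flushed table #182 (1539104 bytes) plus the 11 MiB level-6 file went in, 11987527 bytes came out, giving write-amplify 11987527/1539104 ≈ 7.8 and read-write-amplify (13661140 + 11987527)/1539104 ≈ 16.7, exactly the figures printed above. A minimal check (Python; payloads trimmed to the fields used):

    import json

    # Trimmed EVENT_LOG_v1 payloads from JOB 115/116 above.
    started = json.loads('{"job": 116, "input_data_size": 13661140}')
    finished = json.loads('{"job": 116, "total_output_size": 11987527}')
    l0_input = 1539104  # file_size of flushed table #182 (JOB 115)

    write_amp = finished["total_output_size"] / l0_input
    rw_amp = (started["input_data_size"] + finished["total_output_size"]) / l0_input
    print(f"write-amplify={write_amp:.1f}, read-write-amplify={rw_amp:.1f}")  # 7.8, 16.7

At roughly 12 MB per cycle the mon store compaction is routine housekeeping rather than a symptom of the stuck OSD op.
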
Jan 22 15:04:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:50 compute-2 ceph-mon[77081]: pgmap v2925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:50 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:50 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:50.590+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:50 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:04:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:04:51 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:51 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:51.576+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:52 compute-2 ceph-mon[77081]: pgmap v2926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:52 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:52.622+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:52 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:53 compute-2 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:04:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:53.639+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:53 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:54 compute-2 ceph-mon[77081]: pgmap v2927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:54 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:54.641+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:54 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
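
Between 15:04:53 and 15:04:56 the slow-request count jumps from 17 to 82 and falls back to 19: a burst of new delayed ops hit the vms pool and most cleared within a few seconds, leaving the count barely above the original 17. Bursts like this are easiest to spot by reducing the journal to a timestamp-to-count series (Python; the regex is mine and keeps the last count seen per second):

    import re
    from collections import OrderedDict

    SLOW_RE = re.compile(r'^(\w{3} +\d+ \d\d:\d\d:\d\d) .*?(\d+) slow requests')

    def slow_request_series(lines):
        # Map each journal second to the most recent slow-request count.
        series = OrderedDict()
        for line in lines:
            m = SLOW_RE.match(line)
            if m:
                series[m.group(1)] = int(m.group(2))
        return series

Fed this section, the series sits at 17, spikes to 82 at 15:04:53-54, and the OSD's own reports settle at 19 from 15:04:56 on, with the mon echo lagging one cycle behind.
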
Jan 22 15:04:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:55.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:04:55 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:55 compute-2 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:04:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:55.683+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:55 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:56 compute-2 podman[269153]: 2026-01-22 15:04:56.020867914 +0000 UTC m=+0.069562587 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 15:04:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:56.711+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:56 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:56 compute-2 ceph-mon[77081]: pgmap v2928: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:56 compute-2 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:04:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:04:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:04:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:57.669+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:57 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:57 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:58.672+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:58 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:58 compute-2 ceph-mon[77081]: pgmap v2929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:04:58 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:59 compute-2 sudo[269176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:59 compute-2 sudo[269176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:59 compute-2 sudo[269176]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:04:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:59.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:04:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:04:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:04:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:04:59 compute-2 sudo[269201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:04:59 compute-2 sudo[269201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:04:59 compute-2 sudo[269201]: pam_unix(sudo:session): session closed for user root
Jan 22 15:04:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:59.705+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:59 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:04:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:04:59 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:00.733+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:00 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:00 compute-2 ceph-mon[77081]: pgmap v2930: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:00 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:00 compute-2 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:01.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:01.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:01.760+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:01 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:01 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:02.748+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:02 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:02 compute-2 ceph-mon[77081]: pgmap v2931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:02 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:03.766+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:03 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:03 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:04.786+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:04 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:04 compute-2 ceph-mon[77081]: pgmap v2932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:04 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:05.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:05.831+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:05 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:05 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:05 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:06.792+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:06 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:06 compute-2 ceph-mon[77081]: pgmap v2933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:06 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:07.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:07.833+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:07 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:07 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:08.812+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:08 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:08 compute-2 ceph-mon[77081]: pgmap v2934: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:08 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:09.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:09.793+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:09 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:09 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:10.759+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:10 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:10 compute-2 ceph-mon[77081]: pgmap v2935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:10 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:10 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:11 compute-2 podman[269232]: 2026-01-22 15:05:11.018257496 +0000 UTC m=+0.085297357 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:05:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:11.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:11.754+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:11 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:12 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:12 compute-2 ceph-mon[77081]: pgmap v2936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:12 compute-2 sudo[269259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:12 compute-2 sudo[269259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:12 compute-2 sudo[269259]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:12.710+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:12 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:12 compute-2 sudo[269284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:05:12 compute-2 sudo[269284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:12 compute-2 sudo[269284]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:12 compute-2 sudo[269309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:12 compute-2 sudo[269309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:12 compute-2 sudo[269309]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:12 compute-2 sudo[269335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:05:12 compute-2 sudo[269335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:13.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:13 compute-2 sudo[269335]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-2 sudo[269390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:13 compute-2 sudo[269390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-2 sudo[269390]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-2 sudo[269415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:05:13 compute-2 sudo[269415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-2 sudo[269415]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-2 sudo[269440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:13 compute-2 sudo[269440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-2 sudo[269440]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:13.679+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:13 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:13 compute-2 sudo[269465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 15:05:13 compute-2 sudo[269465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:13 compute-2 sudo[269465]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:14 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:14 compute-2 ceph-mon[77081]: pgmap v2937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:14.714+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:14 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:15.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:15 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:15 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:15 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:15.730+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:15 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:16 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:16 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:16 compute-2 ceph-mon[77081]: pgmap v2938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:16 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:16.778+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:16 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:17.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:17.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:17 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:05:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:05:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:17.791+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:17 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:18.835+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:18 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:19 compute-2 ceph-mon[77081]: pgmap v2939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:19 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:19.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:19.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:19 compute-2 sudo[269511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:19 compute-2 sudo[269511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:19 compute-2 sudo[269511]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:19 compute-2 sudo[269536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:19 compute-2 sudo[269536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:19 compute-2 sudo[269536]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:19.880+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:19 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:20 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:20.891+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:20 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:21.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:21 compute-2 ceph-mon[77081]: pgmap v2940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:21 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:21 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:21 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:21.918+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:21 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:22 compute-2 ceph-mon[77081]: pgmap v2941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:22 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:22.878+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:22 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:23.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:23 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:23.900+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:23 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:24 compute-2 sudo[269563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:24 compute-2 sudo[269563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:24 compute-2 sudo[269563]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:24 compute-2 sudo[269588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:05:24 compute-2 sudo[269588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:24 compute-2 sudo[269588]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:24 compute-2 ceph-mon[77081]: pgmap v2942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:05:24 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:24.927+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:24 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:25.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:25.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:25 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:25 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:25.900+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:25 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:26 compute-2 ceph-mon[77081]: pgmap v2943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:26 compute-2 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 15:05:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:26.923+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:26 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:26 compute-2 podman[269615]: 2026-01-22 15:05:26.987356594 +0000 UTC m=+0.052216173 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 15:05:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:27.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:27.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:27 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:27.875+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:27 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:28 compute-2 ceph-mon[77081]: pgmap v2944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:28 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:28.915+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:28 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:05:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 89K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1920 writes, 9811 keys, 1920 commit groups, 1.0 writes per commit group, ingest: 16.93 MB, 0.03 MB/s
                                           Interval WAL: 1920 writes, 1920 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     85.5      1.13              0.35        58    0.020       0      0       0.0       0.0
                                             L6      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.6    134.8    116.5      4.65              1.82        57    0.082    549K    30K       0.0       0.0
                                            Sum      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.6    108.5    110.4      5.79              2.18       115    0.050    549K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     95.0     96.5      0.86              0.32        14    0.061     95K   3607       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    134.8    116.5      4.65              1.82        57    0.082    549K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     85.8      1.13              0.35        57    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.094, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.62 GB write, 0.12 MB/s write, 0.61 GB read, 0.12 MB/s read, 5.8 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 67.51 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000434 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3555,64.10 MB,21.087%) FilterBlock(115,1.48 MB,0.487152%) IndexBlock(115,1.93 MB,0.633526%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:05:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:29 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:29.885+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:29 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:30 compute-2 ceph-mon[77081]: pgmap v2945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:30 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:30 compute-2 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:30.881+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:30 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:31 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:31.867+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:31 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:32.863+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:32 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:33 compute-2 ceph-mon[77081]: pgmap v2946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:33 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:05:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:05:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:33.882+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:33 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:34 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:34.921+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:34 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:35 compute-2 ceph-mon[77081]: pgmap v2947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:35 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:35 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:35 compute-2 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:35.963+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:35 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:36.979+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:36 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:37.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:37.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:37 compute-2 ceph-mon[77081]: pgmap v2948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:37 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:37.939+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:37 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:38 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:38 compute-2 ceph-mon[77081]: pgmap v2949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:38.918+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:38 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:39.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:39 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:39 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:39 compute-2 sudo[269641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:39 compute-2 sudo[269641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:39 compute-2 sudo[269641]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:39 compute-2 sudo[269666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:39 compute-2 sudo[269666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:39 compute-2 sudo[269666]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:39.926+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:39 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:40 compute-2 ceph-mon[77081]: pgmap v2950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:40 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:40 compute-2 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:40.938+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:40 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:41.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:41.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:41.953+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:41 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:42 compute-2 podman[269692]: 2026-01-22 15:05:42.081491501 +0000 UTC m=+0.129237623 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 15:05:42 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:42.995+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:42 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:43 compute-2 ceph-mon[77081]: pgmap v2951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:43.954+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:43 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:44 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:44 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:44 compute-2 ceph-mon[77081]: pgmap v2952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:44 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:44.920+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:44 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:45.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:45 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:45 compute-2 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:45.925+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:45 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:46 compute-2 ceph-mon[77081]: pgmap v2953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:46 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:46.958+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:46 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:05:47.242 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:05:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:05:47.242 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:05:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:05:47.242 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:05:47 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:47.956+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:47 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:48 compute-2 ceph-mon[77081]: pgmap v2954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:48 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:48.984+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:48 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:49.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:49.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:49 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:50.035+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:50 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:50 compute-2 ceph-mon[77081]: pgmap v2955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:50 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:50 compute-2 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:50.992+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:50 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:51 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:52.000+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:52 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:53 compute-2 ceph-mon[77081]: pgmap v2956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:53 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:53.035+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:53 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:53.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:54 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:54 compute-2 ceph-mon[77081]: pgmap v2957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:54.086+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:54 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:55 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:55.088+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:55 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 15:05:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:55 compute-2 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:55.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:05:56 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:56 compute-2 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:05:56 compute-2 ceph-mon[77081]: pgmap v2958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:56.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:56 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:57.109+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:57 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:05:57 compute-2 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:05:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:05:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:57.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:05:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:58 compute-2 podman[269726]: 2026-01-22 15:05:58.030198234 +0000 UTC m=+0.081805146 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 15:05:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:58.100+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:58 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:05:58 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:05:58 compute-2 ceph-mon[77081]: pgmap v2959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:05:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:59.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:59 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:05:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:05:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:59.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:05:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:05:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:59.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:05:59 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:05:59 compute-2 sudo[269747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:59 compute-2 sudo[269747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:59 compute-2 sudo[269747]: pam_unix(sudo:session): session closed for user root
Jan 22 15:05:59 compute-2 sudo[269772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:05:59 compute-2 sudo[269772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:05:59 compute-2 sudo[269772]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:00.085+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:00 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:00 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:00 compute-2 ceph-mon[77081]: pgmap v2960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:00 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:00 compute-2 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
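
The SLOW_OPS health checks above tick up in lockstep with the wall clock (blocked for 5348 s at 15:06:00, 5353 s at 15:06:05), i.e. the oldest op on osd.2 has been stuck for roughly an hour and a half and is not making progress. A minimal sketch of a journal scraper for these updates, assuming text like the lines above arrives on stdin; the pattern is copied from this log's wording, not from any Ceph interface:

    import re
    import sys

    # Wording taken from the ceph-mon "Health check update" lines above;
    # this is a throwaway journal scraper, not a Ceph API.
    PAT = re.compile(
        r"Health check update: (?P<ops>\d+) slow ops, "
        r"oldest one blocked for (?P<sec>\d+) sec, "
        r"(?P<osd>osd\.\d+) has slow ops \(SLOW_OPS\)"
    )

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            print(f"{m['osd']}: {m['ops']} slow ops, oldest blocked {m['sec']} s")

Fed with e.g. journalctl output piped through it, this prints one summary per mon update and makes the growing blocked-for age easy to graph.
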
Jan 22 15:06:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:01.087+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:01 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:01.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:01 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:02.115+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:02 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:02 compute-2 ceph-mon[77081]: pgmap v2961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:03.144+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:03 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:03.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:03.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:03 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:04.121+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:04 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:04 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:04 compute-2 ceph-mon[77081]: pgmap v2962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
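
The recurring pgmap lines carry the PG state histogram plus the capacity totals; the 2 active+clean+laggy PGs are consistent with the blocked ops on osd.2 while the other 303 stay active+clean. A rough parser for this exact wording (a sketch only; for supported output, `ceph pg stat --format json` or `ceph status --format json` would be the proper route):

    import re
    from typing import Optional

    # Matches the mon's pgmap summary wording seen above, e.g.
    # "pgmap v2962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, ..."
    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<size>\S+ \S+) avail"
    )

    def parse_pgmap(line: str) -> Optional[dict]:
        m = PGMAP.search(line)
        if not m:
            return None
        states = {}
        for part in m["states"].split(", "):
            count, state = part.split(" ", 1)
            states[state] = int(count)
        return {
            "version": int(m["ver"]),
            "total_pgs": int(m["total"]),
            "states": states,  # e.g. {'active+clean+laggy': 2, 'active+clean': 303}
            "data": m["data"],
            "used": m["used"],
            "avail": f"{m['avail']} / {m['size']}",
        }
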
Jan 22 15:06:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:05.099+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:05 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:05.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:05.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:05 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:05 compute-2 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:06.142+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:06 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:06 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:06 compute-2 ceph-mon[77081]: pgmap v2963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:07.104+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:07 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:07.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:07 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:07 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 15:06:08 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:08.102+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:08 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:08 compute-2 ceph-mon[77081]: pgmap v2964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:09 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:09.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:09.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
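
The radosgw "beast" lines are its access log: connection pointer, client address, user, timestamp, request line, HTTP status, body bytes, then latency. The alternating anonymous HEAD / probes from 192.168.122.100 and .102 every two seconds have the shape of a load-balancer health check rather than real traffic. A throwaway parser for the layout above, with the field order inferred from these lines rather than from any documented schema:

    import re

    # Field order inferred from the beast lines in this journal:
    # beast: <conn>: <ip> - <user> [<time>] "<request>" <status> <bytes> - - - latency=<sec>s
    BEAST = re.compile(
        r'beast: (?P<conn>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[0-9.]+)s'
    )

    def parse_beast(line):
        """Return the access-log fields as a dict, or None for other lines."""
        m = BEAST.search(line)
        return m.groupdict() if m else None

Filtering the parsed records on user != "anonymous" or req not starting with "HEAD / " would separate real S3 traffic from these probes.
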
Jan 22 15:06:09 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:10 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:10.184+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:10 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:10 compute-2 ceph-mon[77081]: pgmap v2965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:10 compute-2 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:11.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:11 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:11.230+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:11 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:12 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:12.241+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:12 compute-2 ceph-mon[77081]: pgmap v2966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:12 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:13 compute-2 podman[269804]: 2026-01-22 15:06:13.080028824 +0000 UTC m=+0.119024597 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller)
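
Each podman healthcheck run logs one event line like the two above, with the container's labels and config_data flattened into the parenthesized attribute list; name=, health_status= and health_failing_streak= are the fields worth watching. Because config_data itself embeds commas and quotes, splitting the whole list on commas is unsafe; a sketch that pulls just the known keys instead:

    import re
    import sys

    # Keys as they appear in the podman event lines above. The lookbehind
    # keeps "name=" from also matching "container_name=" or
    # "org.label-schema.name=".
    KEYS = ("name", "health_status", "health_failing_streak")

    def health_fields(line):
        if " container health_status " not in line:
            return None
        out = {}
        for key in KEYS:
            m = re.search(rf"(?<![\w.\-]){key}=([^,)]*)", line)
            if m:
                out[key] = m.group(1)
        return out

    for line in sys.stdin:
        fields = health_fields(line)
        if fields:
            print(fields)

On the two events above this prints name/health_status/streak for ovn_metadata_agent and ovn_controller, both healthy with a failing streak of 0.
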
Jan 22 15:06:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:13.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:13 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:13.272+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:13 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:14 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:14.260+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:14 compute-2 ceph-mon[77081]: pgmap v2967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:14 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:15.297+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:15 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:15 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:15 compute-2 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:16.285+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:16 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:16 compute-2 ceph-mon[77081]: pgmap v2968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:16 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:17.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:17.255+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:17 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:17 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:18.241+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:18 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:18 compute-2 ceph-mon[77081]: pgmap v2969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:18 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:06:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2975360653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:06:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:19.216+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:19 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:19 compute-2 sudo[269834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:19 compute-2 sudo[269834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:19 compute-2 sudo[269834]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:19 compute-2 sudo[269859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:19 compute-2 sudo[269859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:19 compute-2 sudo[269859]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:20 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:20.168+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:20 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.187834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380187875, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1478, "num_deletes": 250, "total_data_size": 2770066, "memory_usage": 2825512, "flush_reason": "Manual Compaction"}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380200772, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 1191330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88906, "largest_seqno": 90379, "table_properties": {"data_size": 1186443, "index_size": 2090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14800, "raw_average_key_size": 21, "raw_value_size": 1174975, "raw_average_value_size": 1727, "num_data_blocks": 89, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094290, "oldest_key_time": 1769094290, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 12988 microseconds, and 6771 cpu microseconds.
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.200822) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 1191330 bytes OK
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.200844) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.203189) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.203209) EVENT_LOG_v1 {"time_micros": 1769094380203202, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.203230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 2763022, prev total WAL file size 2763022, number of live WAL files 2.
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.204843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(1163KB)], [183(11MB)]
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380204956, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13178857, "oldest_snapshot_seqno": -1}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 13924 keys, 9876587 bytes, temperature: kUnknown
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380301806, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 9876587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9804398, "index_size": 36300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34821, "raw_key_size": 383446, "raw_average_key_size": 27, "raw_value_size": 9571032, "raw_average_value_size": 687, "num_data_blocks": 1296, "num_entries": 13924, "num_filter_entries": 13924, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.302155) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9876587 bytes
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.303701) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.0 rd, 101.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(19.4) write-amplify(8.3) OK, records in: 14402, records dropped: 478 output_compression: NoCompression
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.303738) EVENT_LOG_v1 {"time_micros": 1769094380303722, "job": 118, "event": "compaction_finished", "compaction_time_micros": 96937, "compaction_time_cpu_micros": 53246, "output_level": 6, "num_output_files": 1, "total_output_size": 9876587, "num_input_records": 14402, "num_output_records": 13924, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380304397, "job": 118, "event": "table_file_deletion", "file_number": 185}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380308610, "job": 118, "event": "table_file_deletion", "file_number": 183}
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.204693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:06:20 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
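
Interleaved with the prose, the mon's RocksDB also writes machine-readable EVENT_LOG_v1 records; everything after that marker is a plain JSON object, so the flush and compaction history above (job 117 flushing 1478 entries, job 118 compacting to L6 with write-amplify 8.3) can be recovered without any line-format guesswork. A sketch, again reading journal text from stdin:

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        # Everything after the marker on a rocksdb line is a JSON object.
        _, sep, payload = line.partition(MARKER)
        if not sep:
            continue
        try:
            ev = json.loads(payload)
        except json.JSONDecodeError:
            continue
        print(ev.get("time_micros"), ev.get("job"), ev.get("event"))

The full payloads carry far more than this prints: table_file_creation events, for instance, include per-SST key counts, sizes, and compression settings, all as ordinary JSON fields.
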
Jan 22 15:06:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:21.122+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:21 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:21 compute-2 ceph-mon[77081]: pgmap v2970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:21 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:21 compute-2 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:21.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:22.080+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:22 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:22 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:22 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:22 compute-2 ceph-mon[77081]: pgmap v2971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:22 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:23.111+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:23 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 15:06:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:23.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:23 compute-2 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:23.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:24.071+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:24 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:24 compute-2 sudo[269886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:24 compute-2 sudo[269886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:24 compute-2 sudo[269886]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:24 compute-2 sudo[269911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:06:24 compute-2 sudo[269911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:24 compute-2 sudo[269911]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:24 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:24 compute-2 ceph-mon[77081]: pgmap v2972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:24 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:24 compute-2 sudo[269936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:24 compute-2 sudo[269936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:24 compute-2 sudo[269936]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:24 compute-2 sudo[269961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:06:24 compute-2 sudo[269961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:25.082+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:25 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:25 compute-2 sudo[269961]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:25 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:25 compute-2 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:26.064+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:26 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:26 compute-2 ceph-mon[77081]: pgmap v2973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:26 compute-2 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:06:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:06:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:06:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:06:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:06:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:06:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:27.100+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:27 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:27.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:28.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:28 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:28 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:06:28.233 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:06:28 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:06:28.234 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:06:28 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:28 compute-2 ceph-mon[77081]: pgmap v2974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
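The pgmap shows 2 of 305 PGs in active+clean+laggy, i.e. still serving I/O but acknowledging it slowly, consistent with the slow ops above. To identify the laggy PGs and check whether their acting sets include osd.2 (pgs_brief limits the dump to pgid, state, and up/acting sets):

    ceph pg dump pgs_brief | grep laggy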
Jan 22 15:06:29 compute-2 podman[270020]: 2026-01-22 15:06:29.028875672 +0000 UTC m=+0.081413085 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
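The podman health_status events record the container-defined healthcheck (test: /openstack/healthcheck) passing with a zero failing streak. The same check can be invoked on demand; exit status 0 corresponds to health_status=healthy:

    podman healthcheck run ovn_metadata_agent; echo $?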
Jan 22 15:06:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:29.182+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:29 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:29.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:29 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:30.232+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:30 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:30 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:30 compute-2 ceph-mon[77081]: pgmap v2975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:30 compute-2 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
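_set_new_cache_sizes is the mon's periodic cache autotuner splitting its memory budget across incremental-map, full-map, and RocksDB caches; cache_size is about 0.95 GiB here. The governing option should be readable with the command below (the option name is per upstream docs; its value on this cluster is inferred from the log, not confirmed):

    ceph config get mon mon_memory_target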
Jan 22 15:06:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:06:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.5 total, 600.0 interval
                                           Cumulative writes: 12K writes, 40K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3884 syncs, 3.13 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1017 writes, 2173 keys, 1017 commit groups, 1.0 writes per commit group, ingest: 1.28 MB, 0.00 MB/s
                                           Interval WAL: 1017 writes, 473 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
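This RocksDB stats block is the engine's routine 600-second dump (Uptime: 5400.5 total, 600.0 interval): ~12K cumulative writes, zero write stalls, so the KV store itself does not look like the bottleneck behind the slow ops. Finer-grained counters live in the OSD perf counters; treating "rocksdb" as a section filter for perf dump is an assumption about the admin-socket argument handling, and the command needs to run where osd.2's socket is reachable (e.g. inside the cephadm container on compute-2):

    ceph daemon osd.2 perf dump rocksdb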
Jan 22 15:06:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:31.267+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:31 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:31.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:31 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:32.235+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:32 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:32 compute-2 ceph-mon[77081]: pgmap v2976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:32 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:33.187+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:33 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:33.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:33 compute-2 sudo[270041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:33 compute-2 sudo[270041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:33 compute-2 sudo[270041]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:33 compute-2 sudo[270066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:06:33 compute-2 sudo[270066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:33 compute-2 sudo[270066]: pam_unix(sudo:session): session closed for user root
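The recurring sudo entries (COMMAND=/bin/true, /bin/ls /etc/sysctl.d) are consistent with cephadm's host checks over ssh as the ceph-admin user: /bin/true verifies passwordless root escalation, and the /etc/sysctl.d listing feeds its config reconciliation. The escalation probe reduces to roughly the following (illustrative, assuming ssh access as ceph-admin):

    ssh ceph-admin@compute-2 sudo /bin/true && echo sudo-ok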
Jan 22 15:06:33 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:06:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:34.225+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:34 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:34 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:06:34.235 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
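This DbSetCommand transaction is the metadata agent acknowledging southbound sequence number 49 by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row, after the 6-second delay logged at 15:06:28 (the delay staggers writes across chassis). The equivalent manual inspection, assuming ovn-sbctl can reach the southbound DB from this host (in this deployment that typically means running inside the ovn_controller container):

    ovn-sbctl list SB_Global        # nb_cfg the agents must catch up to
    ovn-sbctl get Chassis_Private c4fa18b6-ed0f-47ac-8eec-d1399749aa8e external_ids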
Jan 22 15:06:34 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:34 compute-2 ceph-mon[77081]: pgmap v2977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:35.204+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:35 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:35 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:35 compute-2 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:36.209+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:36 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:37 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:37 compute-2 ceph-mon[77081]: pgmap v2978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:37.250+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:37 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:37.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:37.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:38 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:38 compute-2 ceph-mon[77081]: pgmap v2979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:38.263+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:38 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:39 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:39.288+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:39 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:39.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:39 compute-2 sudo[270094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:39 compute-2 sudo[270094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:39 compute-2 sudo[270094]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:40 compute-2 sudo[270119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:06:40 compute-2 sudo[270119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:06:40 compute-2 sudo[270119]: pam_unix(sudo:session): session closed for user root
Jan 22 15:06:40 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:40 compute-2 ceph-mon[77081]: pgmap v2980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:40.253+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:40 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:41 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:41 compute-2 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:41 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:41.210+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:41 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:42 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:42 compute-2 ceph-mon[77081]: pgmap v2981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:42.244+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:42 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:43 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:43.231+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:43 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:44 compute-2 podman[270146]: 2026-01-22 15:06:44.055061431 +0000 UTC m=+0.109107007 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 15:06:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:44.229+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:44 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:44 compute-2 ceph-mon[77081]: pgmap v2982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:44 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:45.199+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:45 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:45.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:45 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:45 compute-2 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:46.209+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:46 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:46 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:46 compute-2 ceph-mon[77081]: pgmap v2983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:47.184+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:47 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:06:47.243 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:06:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:06:47.243 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:06:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:06:47.243 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:06:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:47 compute-2 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 15:06:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:48.153+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:48 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:48 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:48 compute-2 ceph-mon[77081]: pgmap v2984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:49.182+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:49 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:49 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:50.144+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:50 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:50 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:50 compute-2 ceph-mon[77081]: pgmap v2985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:50 compute-2 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:51.193+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:51 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:06:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:06:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:52 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:52.174+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:52 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:53.142+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:53 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:53 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:53 compute-2 ceph-mon[77081]: pgmap v2986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:53.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:54.163+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:54 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:54 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:54 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:54 compute-2 ceph-mon[77081]: pgmap v2987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:55.124+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:55 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:55.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:55 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:55 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5402 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:06:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:06:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:56.153+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:56 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:56 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:56 compute-2 ceph-mon[77081]: pgmap v2988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:57.158+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:57 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:06:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:57.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:06:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:57 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:58 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:58.146+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:58 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:58 compute-2 ceph-mon[77081]: pgmap v2989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:06:59 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:06:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:59.142+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:06:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:06:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:06:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:59.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:06:59 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:00 compute-2 podman[270180]: 2026-01-22 15:07:00.019092588 +0000 UTC m=+0.074681659 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:07:00 compute-2 sudo[270197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:00 compute-2 sudo[270197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:00 compute-2 sudo[270197]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:00 compute-2 sudo[270222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:00 compute-2 sudo[270222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:00 compute-2 sudo[270222]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:00.186+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:00 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:00 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:00 compute-2 ceph-mon[77081]: pgmap v2990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:00 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:01.211+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:01 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:07:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:01.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:01.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:01 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:02.171+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:02 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:02 compute-2 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 11 ])
Jan 22 15:07:02 compute-2 ceph-mon[77081]: pgmap v2991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 15:07:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:03.151+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:03 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:03 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:04.174+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:04 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:04 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:04 compute-2 ceph-mon[77081]: pgmap v2992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Jan 22 15:07:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:05.161+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:05 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:05.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:06.209+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:06 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:06 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:06 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:07 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:07 compute-2 ceph-mon[77081]: pgmap v2993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:07 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:07.226+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:07 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:07.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:08 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:08 compute-2 ceph-mon[77081]: pgmap v2994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:08.253+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:08 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:09 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:09.277+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:09 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:09.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:09.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:10 compute-2 ceph-mon[77081]: pgmap v2995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:10 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:10.229+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:10 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:11 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:11 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:11.277+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:11 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:11.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:12 compute-2 ceph-mon[77081]: pgmap v2996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 159 op/s
Jan 22 15:07:12 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:12.325+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:12 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:13 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:13.352+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:13 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:13.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:14 compute-2 ceph-mon[77081]: pgmap v2997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Jan 22 15:07:14 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:14.332+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:14 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:14 compute-2 podman[270254]: 2026-01-22 15:07:14.83106425 +0000 UTC m=+0.135221158 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 15:07:15 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:15 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:15.378+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:15 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:15.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:16 compute-2 ceph-mon[77081]: pgmap v2998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Jan 22 15:07:16 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:16.364+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:16 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:17 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:07:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:17.337+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:17 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000078s ======
Jan 22 15:07:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:17.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Jan 22 15:07:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:17.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:18.353+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:18 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:18 compute-2 ceph-mon[77081]: pgmap v2999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:18 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:07:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:07:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:07:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:07:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:19.341+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:19 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:19 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:07:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:07:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:19.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:19.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:20 compute-2 sudo[270284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:20 compute-2 sudo[270284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:20 compute-2 sudo[270284]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:20.355+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:20 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:20 compute-2 sudo[270309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:20 compute-2 sudo[270309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:20 compute-2 sudo[270309]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:20 compute-2 ceph-mon[77081]: pgmap v3000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:20 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:20 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:21.347+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:21 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:21.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:21 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:22.327+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:22 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:23.320+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:23 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 15:07:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:23.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:23 compute-2 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:23.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:23 compute-2 ceph-mon[77081]: pgmap v3001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:23 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:24.312+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:24 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:24 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:24 compute-2 ceph-mon[77081]: pgmap v3002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:24 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:25.290+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:25 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:25 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:25 compute-2 ceph-mon[77081]: Health check update: 81 slow ops, oldest one blocked for 5432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:25 compute-2 systemd[1]: Starting dnf makecache...
Jan 22 15:07:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:26 compute-2 dnf[270337]: Metadata cache refreshed recently.
Jan 22 15:07:26 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 15:07:26 compute-2 systemd[1]: Finished dnf makecache.
Jan 22 15:07:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:26.295+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:26 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:26 compute-2 ceph-mon[77081]: pgmap v3003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:26 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:27.280+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:27 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:27 compute-2 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 15:07:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:27.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:27.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:28.264+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:28 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:28 compute-2 ceph-mon[77081]: pgmap v3004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:28 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:29.309+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:29 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:29.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:07:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:29.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:07:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:30.359+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:30 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:30 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:31 compute-2 podman[270341]: 2026-01-22 15:07:31.018380831 +0000 UTC m=+0.073279622 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:07:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:31.385+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:31 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:31 compute-2 ceph-mon[77081]: pgmap v3005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:31 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:31 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:31 compute-2 ceph-mon[77081]: Health check update: 81 slow ops, oldest one blocked for 5437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:32.417+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:32 compute-2 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:32 compute-2 ceph-mon[77081]: pgmap v3006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:32 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e174 e174: 3 total, 3 up, 3 in
Jan 22 15:07:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:33.387+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:33 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:33 compute-2 sudo[270361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:33 compute-2 sudo[270361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:33 compute-2 sudo[270361]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:33.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:33 compute-2 sudo[270386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:07:33 compute-2 sudo[270386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:33 compute-2 sudo[270386]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:33 compute-2 sudo[270411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:33 compute-2 sudo[270411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:33 compute-2 sudo[270411]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:33 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:33 compute-2 ceph-mon[77081]: osdmap e174: 3 total, 3 up, 3 in
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.944706) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453944755, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1275, "num_deletes": 306, "total_data_size": 2195447, "memory_usage": 2237056, "flush_reason": "Manual Compaction"}
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453954651, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 1441669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90384, "largest_seqno": 91654, "table_properties": {"data_size": 1436443, "index_size": 2429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14759, "raw_average_key_size": 21, "raw_value_size": 1424591, "raw_average_value_size": 2076, "num_data_blocks": 104, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 306, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094381, "oldest_key_time": 1769094381, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 9968 microseconds, and 3829 cpu microseconds.
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.954683) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 1441669 bytes OK
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.954697) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956464) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956477) EVENT_LOG_v1 {"time_micros": 1769094453956473, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956492) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2189000, prev total WAL file size 2189000, number of live WAL files 2.
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(1407KB)], [186(9645KB)]
Jan 22 15:07:33 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453957189, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 11318256, "oldest_snapshot_seqno": -1}
Jan 22 15:07:33 compute-2 sudo[270436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:07:33 compute-2 sudo[270436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 13979 keys, 9685429 bytes, temperature: kUnknown
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454034131, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 9685429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9612843, "index_size": 36505, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 385337, "raw_average_key_size": 27, "raw_value_size": 9378605, "raw_average_value_size": 670, "num_data_blocks": 1301, "num_entries": 13979, "num_filter_entries": 13979, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.034404) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9685429 bytes
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.035371) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.0 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(14.6) write-amplify(6.7) OK, records in: 14610, records dropped: 631 output_compression: NoCompression
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.035386) EVENT_LOG_v1 {"time_micros": 1769094454035379, "job": 120, "event": "compaction_finished", "compaction_time_micros": 77004, "compaction_time_cpu_micros": 45319, "output_level": 6, "num_output_files": 1, "total_output_size": 9685429, "num_input_records": 14610, "num_output_records": 13979, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454035672, "job": 120, "event": "table_file_deletion", "file_number": 188}
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454037112, "job": 120, "event": "table_file_deletion", "file_number": 186}
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:07:34 compute-2 sudo[270436]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:34.436+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:34 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:35 compute-2 ceph-mon[77081]: pgmap v3008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:07:35 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:35.473+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:35 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:35.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:36 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:07:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:07:36 compute-2 ceph-mon[77081]: pgmap v3009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:36 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:36 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:07:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:07:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:07:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:36.425+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:36 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:37 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:37.455+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:37 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:37.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:38.426+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:38 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:38 compute-2 ceph-mon[77081]: pgmap v3010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:38 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:39.410+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:39 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:39 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:39.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:40.396+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:40 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:40 compute-2 sudo[270495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:40 compute-2 sudo[270495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:40 compute-2 sudo[270495]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:40 compute-2 sudo[270520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:40 compute-2 sudo[270520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:40 compute-2 sudo[270520]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:40 compute-2 ceph-mon[77081]: pgmap v3011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:40 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:40 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:41.434+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:41 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:41.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 15:07:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 15:07:41 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:07:42 compute-2 sudo[270546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:07:42 compute-2 sudo[270546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:42 compute-2 sudo[270546]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:42 compute-2 sudo[270571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:07:42 compute-2 sudo[270571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:07:42 compute-2 sudo[270571]: pam_unix(sudo:session): session closed for user root
Jan 22 15:07:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:42.405+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:42 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:43 compute-2 ceph-mon[77081]: pgmap v3012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 22 15:07:43 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:43.361+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:43 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:43.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:44.357+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:44 compute-2 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:44 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:44 compute-2 ceph-mon[77081]: pgmap v3013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.9 MiB/s wr, 17 op/s
Jan 22 15:07:44 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e175 e175: 3 total, 3 up, 3 in
Jan 22 15:07:45 compute-2 podman[270598]: 2026-01-22 15:07:45.078000439 +0000 UTC m=+0.127210089 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:07:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:45.392+0000 7f47f8ed4640 -1 osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:45 compute-2 ceph-osd[79779]: osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:45.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:46 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:46 compute-2 ceph-mon[77081]: osdmap e175: 3 total, 3 up, 3 in
Jan 22 15:07:46 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5452 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:46.415+0000 7f47f8ed4640 -1 osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:46 compute-2 ceph-osd[79779]: osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:47 compute-2 ceph-mon[77081]: pgmap v3015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 22 15:07:47 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:07:47.244 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:07:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:07:47.245 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:07:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:07:47.246 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:07:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:47.424+0000 7f47f8ed4640 -1 osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:47 compute-2 ceph-osd[79779]: osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:47.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:48 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:48 compute-2 ceph-mon[77081]: pgmap v3016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 22 15:07:48 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e176 e176: 3 total, 3 up, 3 in
Jan 22 15:07:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:48.399+0000 7f47f8ed4640 -1 osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:48 compute-2 ceph-osd[79779]: osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:49.376+0000 7f47f8ed4640 -1 osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:49 compute-2 ceph-osd[79779]: osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:49 compute-2 ceph-mon[77081]: osdmap e176: 3 total, 3 up, 3 in
Jan 22 15:07:49 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:49.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:49.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:50.343+0000 7f47f8ed4640 -1 osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:50 compute-2 ceph-osd[79779]: osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:50 compute-2 ceph-mon[77081]: pgmap v3018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 860 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 MiB/s wr, 32 op/s
Jan 22 15:07:50 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e177 e177: 3 total, 3 up, 3 in
Jan 22 15:07:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:51.353+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:51 compute-2 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:51.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:52 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:52 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:52 compute-2 ceph-mon[77081]: osdmap e177: 3 total, 3 up, 3 in
Jan 22 15:07:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:52.318+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:52 compute-2 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:53 compute-2 ceph-mon[77081]: pgmap v3020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.1 MiB/s wr, 66 op/s
Jan 22 15:07:53 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:53 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:53.276+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:53 compute-2 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:53.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:54 compute-2 ceph-mon[77081]: pgmap v3021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 2.6 MiB/s wr, 30 op/s
Jan 22 15:07:54 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:54.274+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:54 compute-2 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:55.270+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:55 compute-2 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e178 e178: 3 total, 3 up, 3 in
Jan 22 15:07:55 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:55.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:07:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:56.250+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:56 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:57 compute-2 ceph-mon[77081]: pgmap v3022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 41 op/s
Jan 22 15:07:57 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:57 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:07:57 compute-2 ceph-mon[77081]: osdmap e178: 3 total, 3 up, 3 in
Jan 22 15:07:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:57.259+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:57 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:07:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:07:57.625 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:07:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:07:57.626 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:07:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:57.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:07:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:07:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:58.245+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:58 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:07:58 compute-2 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 15:07:58 compute-2 ceph-mon[77081]: pgmap v3024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 868 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.0 MiB/s wr, 32 op/s
Jan 22 15:07:58 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:07:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:59.239+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:59 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:07:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:07:59 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:07:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:59.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:07:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:07:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:07:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:59.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:00.234+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:00 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:00 compute-2 sudo[270632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:00 compute-2 sudo[270632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:00 compute-2 sudo[270632]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:00 compute-2 sudo[270657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:00 compute-2 sudo[270657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:00 compute-2 sudo[270657]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:01.279+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:01 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:01 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:08:01.628 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:08:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:01.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:01 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:01 compute-2 ceph-mon[77081]: pgmap v3025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 855 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 718 B/s wr, 11 op/s
Jan 22 15:08:01 compute-2 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:02 compute-2 podman[270683]: 2026-01-22 15:08:02.006393475 +0000 UTC m=+0.073260502 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 15:08:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:02.239+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:02 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:03 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:03 compute-2 ceph-mon[77081]: pgmap v3026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 15:08:03 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:03.238+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:03 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:04.208+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:04 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:04 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:05.214+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:05 compute-2 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:05 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:05 compute-2 ceph-mon[77081]: pgmap v3027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 22 15:08:05 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:05.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:05.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 e179: 3 total, 3 up, 3 in
Jan 22 15:08:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:06.202+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:07 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:07 compute-2 ceph-mon[77081]: pgmap v3028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 22 15:08:07 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:07 compute-2 ceph-mon[77081]: osdmap e179: 3 total, 3 up, 3 in
Jan 22 15:08:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:07.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:07.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:08.200+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:08 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:08 compute-2 ceph-mon[77081]: pgmap v3030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Jan 22 15:08:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:09.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:09 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:09 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:09.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:10.168+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.620593) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490620655, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 842, "num_deletes": 329, "total_data_size": 1269048, "memory_usage": 1293432, "flush_reason": "Manual Compaction"}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490634114, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 822633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91659, "largest_seqno": 92496, "table_properties": {"data_size": 818649, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11369, "raw_average_key_size": 21, "raw_value_size": 809707, "raw_average_value_size": 1502, "num_data_blocks": 65, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 329, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094454, "oldest_key_time": 1769094454, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 13585 microseconds, and 4315 cpu microseconds.
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.634188) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 822633 bytes OK
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.634207) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.635956) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.635971) EVENT_LOG_v1 {"time_micros": 1769094490635966, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.636017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1264241, prev total WAL file size 1264241, number of live WAL files 2.
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.636912) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323734' seq:72057594037927935, type:22 .. '6C6F676D0034353331' seq:0, type:0; will stop at (end)
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(803KB)], [189(9458KB)]
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490636984, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 10508062, "oldest_snapshot_seqno": -1}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 13845 keys, 10339609 bytes, temperature: kUnknown
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490742977, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 10339609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10266777, "index_size": 37135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34629, "raw_key_size": 383304, "raw_average_key_size": 27, "raw_value_size": 10033513, "raw_average_value_size": 724, "num_data_blocks": 1325, "num_entries": 13845, "num_filter_entries": 13845, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.743273) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10339609 bytes
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.747713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.1 rd, 97.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(25.3) write-amplify(12.6) OK, records in: 14518, records dropped: 673 output_compression: NoCompression
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.747752) EVENT_LOG_v1 {"time_micros": 1769094490747737, "job": 122, "event": "compaction_finished", "compaction_time_micros": 106056, "compaction_time_cpu_micros": 39521, "output_level": 6, "num_output_files": 1, "total_output_size": 10339609, "num_input_records": 14518, "num_output_records": 13845, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490748051, "job": 122, "event": "table_file_deletion", "file_number": 191}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490749768, "job": 122, "event": "table_file_deletion", "file_number": 189}
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.636801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:10 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:08:11 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:11 compute-2 ceph-mon[77081]: pgmap v3031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Jan 22 15:08:11 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:11.129+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:11.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:12.152+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:12 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:13.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:13.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:13 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:13 compute-2 ceph-mon[77081]: pgmap v3032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:13 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:14.195+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:15 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:15 compute-2 ceph-mon[77081]: pgmap v3033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:15.170+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:15.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:16 compute-2 podman[270710]: 2026-01-22 15:08:16.064067809 +0000 UTC m=+0.110052722 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:08:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:16.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:16 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:16 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:17.227+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:17.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:18 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:18 compute-2 ceph-mon[77081]: pgmap v3034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:18 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:18.194+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:19.208+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:19 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:19 compute-2 ceph-mon[77081]: pgmap v3035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:08:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3992916383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:08:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:19.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:20.199+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:20 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:20 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:20 compute-2 ceph-mon[77081]: pgmap v3036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:20 compute-2 sudo[270738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:20 compute-2 sudo[270738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:20 compute-2 sudo[270738]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:20 compute-2 sudo[270763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:20 compute-2 sudo[270763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:20 compute-2 sudo[270763]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:21.162+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:21.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:22.132+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:22 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:22 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:23.601+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:23.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:23 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:23 compute-2 ceph-mon[77081]: pgmap v3037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:23 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:24.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:25 compute-2 ceph-mon[77081]: pgmap v3038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:25 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:25.538+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:25.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:25.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:26.516+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:27.504+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:27 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:27 compute-2 ceph-mon[77081]: pgmap v3039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:27 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:27.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:27.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:28.485+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-2 ceph-mon[77081]: pgmap v3040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:29 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:29.457+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:29.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:29.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:30.481+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:30 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:30 compute-2 ceph-mon[77081]: pgmap v3041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:30 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:31.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:31.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:32 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:32 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:32.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:33 compute-2 podman[270795]: 2026-01-22 15:08:33.032798043 +0000 UTC m=+0.085867831 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:08:33 compute-2 ceph-mon[77081]: pgmap v3042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:33 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:33.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:33.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:34 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:34 compute-2 ceph-mon[77081]: pgmap v3043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:34 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:34.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:35 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:35.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:35.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:35.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:36.456+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:36 compute-2 ceph-mon[77081]: pgmap v3044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:36 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:36 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:37.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:37 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:37.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:38.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:38 compute-2 ceph-mon[77081]: pgmap v3045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:38 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:39.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:39.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:39.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:40.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:40 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:40 compute-2 sudo[270820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:40 compute-2 sudo[270820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:40 compute-2 sudo[270820]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:41 compute-2 sudo[270845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:41 compute-2 sudo[270845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:41 compute-2 sudo[270845]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:41.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:41.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:41.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:42 compute-2 ceph-mon[77081]: pgmap v3046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:42 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:42 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:42 compute-2 sudo[270870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:42 compute-2 sudo[270870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-2 sudo[270870]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:42 compute-2 sudo[270895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:08:42 compute-2 sudo[270895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-2 sudo[270895]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:42 compute-2 sudo[270920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:42 compute-2 sudo[270920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-2 sudo[270920]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:42 compute-2 sudo[270945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:08:42 compute-2 sudo[270945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:42.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:42 compute-2 sudo[270945]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:43 compute-2 sudo[271000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:43 compute-2 sudo[271000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:43 compute-2 sudo[271000]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:43 compute-2 sudo[271025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:08:43 compute-2 sudo[271025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:43 compute-2 sudo[271025]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:43 compute-2 sudo[271050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:43 compute-2 sudo[271050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:43 compute-2 sudo[271050]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:43 compute-2 sudo[271075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 15:08:43 compute-2 sudo[271075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:43.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:43 compute-2 podman[271140]: 2026-01-22 15:08:43.59185786 +0000 UTC m=+0.045108268 container create e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 15:08:43 compute-2 systemd[1]: Started libpod-conmon-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope.
Jan 22 15:08:43 compute-2 systemd[1]: Started libcrun container.
Jan 22 15:08:43 compute-2 podman[271140]: 2026-01-22 15:08:43.568550832 +0000 UTC m=+0.021801270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:08:43 compute-2 podman[271140]: 2026-01-22 15:08:43.680880533 +0000 UTC m=+0.134130941 container init e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 15:08:43 compute-2 podman[271140]: 2026-01-22 15:08:43.690883083 +0000 UTC m=+0.144133471 container start e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 22 15:08:43 compute-2 podman[271140]: 2026-01-22 15:08:43.695189956 +0000 UTC m=+0.148440344 container attach e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 15:08:43 compute-2 elastic_bhaskara[271156]: 167 167
Jan 22 15:08:43 compute-2 systemd[1]: libpod-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope: Deactivated successfully.
Jan 22 15:08:43 compute-2 conmon[271156]: conmon e9e50f4d6ab855d42868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope/container/memory.events
Jan 22 15:08:43 compute-2 podman[271140]: 2026-01-22 15:08:43.700475624 +0000 UTC m=+0.153726022 container died e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 15:08:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-ff7d6455c8781e90dcd3729e7d6520ca745d1f80029a715bcb8de0364eef1e50-merged.mount: Deactivated successfully.
Jan 22 15:08:43 compute-2 podman[271140]: 2026-01-22 15:08:43.74978793 +0000 UTC m=+0.203038308 container remove e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 15:08:43 compute-2 systemd[1]: libpod-conmon-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope: Deactivated successfully.
Jan 22 15:08:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:43.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:43 compute-2 ceph-mon[77081]: pgmap v3047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:43 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:43 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:43 compute-2 podman[271181]: 2026-01-22 15:08:43.964303086 +0000 UTC m=+0.046751350 container create 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 15:08:44 compute-2 podman[271181]: 2026-01-22 15:08:43.946292666 +0000 UTC m=+0.028740970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:08:44 compute-2 systemd[1]: Started libpod-conmon-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope.
Jan 22 15:08:44 compute-2 systemd[1]: Started libcrun container.
Jan 22 15:08:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 15:08:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 15:08:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 15:08:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 15:08:44 compute-2 podman[271181]: 2026-01-22 15:08:44.12734018 +0000 UTC m=+0.209788494 container init 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 15:08:44 compute-2 podman[271181]: 2026-01-22 15:08:44.135639776 +0000 UTC m=+0.218088050 container start 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 15:08:44 compute-2 podman[271181]: 2026-01-22 15:08:44.139346883 +0000 UTC m=+0.221795207 container attach 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 15:08:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:44.516+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:44 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:44 compute-2 ceph-mon[77081]: pgmap v3048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:44 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-2 inspiring_borg[271197]: [
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:     {
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "available": false,
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "ceph_device": false,
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "lsm_data": {},
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "lvs": [],
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "path": "/dev/sr0",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "rejected_reasons": [
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "Has a FileSystem",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "Insufficient space (<5GB)"
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         ],
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         "sys_api": {
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "actuators": null,
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "device_nodes": "sr0",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "devname": "sr0",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "human_readable_size": "482.00 KB",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "id_bus": "ata",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "model": "QEMU DVD-ROM",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "nr_requests": "2",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "parent": "/dev/sr0",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "partitions": {},
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "path": "/dev/sr0",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "removable": "1",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "rev": "2.5+",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "ro": "0",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "rotational": "1",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "sas_address": "",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "sas_device_handle": "",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "scheduler_mode": "mq-deadline",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "sectors": 0,
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "sectorsize": "2048",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "size": 493568.0,
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "support_discard": "2048",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "type": "disk",
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:             "vendor": "QEMU"
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:         }
Jan 22 15:08:45 compute-2 inspiring_borg[271197]:     }
Jan 22 15:08:45 compute-2 inspiring_borg[271197]: ]
Jan 22 15:08:45 compute-2 systemd[1]: libpod-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope: Deactivated successfully.
Jan 22 15:08:45 compute-2 systemd[1]: libpod-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope: Consumed 1.126s CPU time.
Jan 22 15:08:45 compute-2 podman[271181]: 2026-01-22 15:08:45.25261767 +0000 UTC m=+1.335065964 container died 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 15:08:45 compute-2 systemd[1]: var-lib-containers-storage-overlay-2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700-merged.mount: Deactivated successfully.
Jan 22 15:08:45 compute-2 podman[271181]: 2026-01-22 15:08:45.307992659 +0000 UTC m=+1.390440923 container remove 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 15:08:45 compute-2 systemd[1]: libpod-conmon-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope: Deactivated successfully.
Jan 22 15:08:45 compute-2 sudo[271075]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:45.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:45.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:45 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:08:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:08:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:46.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:47 compute-2 podman[272291]: 2026-01-22 15:08:47.058022631 +0000 UTC m=+0.106121607 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 15:08:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:08:47.245 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:08:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:08:47.246 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:08:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:08:47.246 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:08:47 compute-2 ceph-mon[77081]: pgmap v3049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:47 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:47.580+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:47.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:48 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:48 compute-2 ceph-mon[77081]: pgmap v3050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:48 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:48.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:49.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:49 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:49.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:50.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:51 compute-2 ceph-mon[77081]: pgmap v3051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:51 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:51 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:51.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:51.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:52 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:52 compute-2 ceph-mon[77081]: pgmap v3052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:52 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:52.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:53.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:53 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:53.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:53.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:54.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:54 compute-2 ceph-mon[77081]: pgmap v3053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:54 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:55.720+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:08:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:08:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:08:56 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:56 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:08:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:56.683+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:57.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:57 compute-2 ceph-mon[77081]: pgmap v3054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:57 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:57 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:57.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:58 compute-2 sudo[272324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:08:58 compute-2 sudo[272324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:58 compute-2 sudo[272324]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:58 compute-2 sudo[272349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:08:58 compute-2 sudo[272349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:08:58 compute-2 sudo[272349]: pam_unix(sudo:session): session closed for user root
Jan 22 15:08:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:58.750+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:58 compute-2 ceph-mon[77081]: pgmap v3055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:08:58 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:08:58 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:59.787+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:08:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:08:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:59.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:08:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:08:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:08:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:59.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:00 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:00.762+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:01 compute-2 sudo[272376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:01 compute-2 sudo[272376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:01 compute-2 sudo[272376]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:01 compute-2 sudo[272401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:01 compute-2 sudo[272401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:01 compute-2 sudo[272401]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:01 compute-2 ceph-mon[77081]: pgmap v3056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:01 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:01 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:01 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:01.773+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:01.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:02 compute-2 ceph-mon[77081]: pgmap v3057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:02 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:02.766+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:03 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:03.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:03.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:04 compute-2 podman[272427]: 2026-01-22 15:09:04.014503146 +0000 UTC m=+0.068026091 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:09:04 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:09:04.104 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:09:04 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:09:04.106 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:09:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:04.712+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:05 compute-2 ceph-mon[77081]: pgmap v3058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:05 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:05.723+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:05.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:05.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:06 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:06 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:06.674+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:07 compute-2 ceph-mon[77081]: pgmap v3059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:07 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:07 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:07.645+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:07.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:07.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:08 compute-2 ceph-mon[77081]: pgmap v3060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:08 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:08.673+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:09.680+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:09.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:10 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:10.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:11 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:09:11.108 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:09:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:11 compute-2 ceph-mon[77081]: pgmap v3061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:11 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:11 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:11 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:11.746+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:11.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:11.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:12.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:12 compute-2 ceph-mon[77081]: pgmap v3062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:12 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:13.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:13 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:13.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:13.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:14.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:14 compute-2 ceph-mon[77081]: pgmap v3063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:14 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:15.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:15 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:15 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:15.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:15.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:16.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:16 compute-2 ceph-mon[77081]: pgmap v3064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:16 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:17.772+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:17.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:17 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:17.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:18 compute-2 podman[272457]: 2026-01-22 15:09:18.044280946 +0000 UTC m=+0.098583430 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:09:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:09:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:09:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:09:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:09:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:18.768+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:18 compute-2 ceph-mon[77081]: pgmap v3065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:18 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:09:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:09:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:19.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:19.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:19 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:19.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:20.733+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:20 compute-2 ceph-mon[77081]: pgmap v3066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:20 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:20 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:21 compute-2 sudo[272486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:21 compute-2 sudo[272486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:21 compute-2 sudo[272486]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:21 compute-2 sudo[272511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:21 compute-2 sudo[272511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:21 compute-2 sudo[272511]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:21.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:21.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:22 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:22.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:23 compute-2 ceph-mon[77081]: pgmap v3067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:23 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:23.670+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:23.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:24 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:24 compute-2 ceph-mon[77081]: pgmap v3068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:24.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:25 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:25.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:25.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:26.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:27 compute-2 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:09:27 compute-2 ceph-mon[77081]: pgmap v3069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:27 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:27 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:27.594+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:27.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:28.548+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:28 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:28 compute-2 ceph-mon[77081]: pgmap v3070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:28 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:29.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:29 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:29.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:29.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:30.592+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:31 compute-2 ceph-mon[77081]: pgmap v3071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:31 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:31 compute-2 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:31.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:31.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:32 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:32.636+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:33 compute-2 ceph-mon[77081]: pgmap v3072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:33 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:33.644+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:34 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:34 compute-2 ceph-mon[77081]: pgmap v3073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:34.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:35 compute-2 podman[272543]: 2026-01-22 15:09:35.026628609 +0000 UTC m=+0.076743679 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 15:09:35 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:35.629+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:35.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:36 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:36 compute-2 ceph-mon[77081]: pgmap v3074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:36 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:36.581+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:37 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:37 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:37.555+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:38 compute-2 ceph-mon[77081]: pgmap v3075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:38 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:38.595+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:39 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:39.588+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:39.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:40 compute-2 ceph-mon[77081]: pgmap v3076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:40 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:40.586+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:41 compute-2 sudo[272566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:41 compute-2 sudo[272566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:41 compute-2 sudo[272566]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:41 compute-2 sudo[272591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:41 compute-2 sudo[272591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:41 compute-2 sudo[272591]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:41 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:41 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:41.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:42.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:42 compute-2 ceph-mon[77081]: pgmap v3077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:42 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:42.577+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:43 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:43.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:43.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:44.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:44.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:44 compute-2 ceph-mon[77081]: pgmap v3078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:44 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:45.609+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:45 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:45 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:45.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:46.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:46.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:46 compute-2 ceph-mon[77081]: pgmap v3079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:46 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:09:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:09:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:09:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:09:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:09:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:09:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:47.541+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:47 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:48.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:48.509+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:48 compute-2 ceph-mon[77081]: pgmap v3080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:48 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:49 compute-2 podman[272620]: 2026-01-22 15:09:49.021284281 +0000 UTC m=+0.084596085 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 15:09:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:49.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:49 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:49.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:50.492+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:50 compute-2 ceph-mon[77081]: pgmap v3081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:50 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:50 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:51.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:51 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:51.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:52.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:52 compute-2 ceph-mon[77081]: pgmap v3082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:52 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:53.469+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:53.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:54 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:09:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:09:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:54.450+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:55 compute-2 ceph-mon[77081]: pgmap v3083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:55 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:55.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:55.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:56 compute-2 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 15:09:56 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:09:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:09:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:56.454+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:57 compute-2 ceph-mon[77081]: pgmap v3084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:57 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:57.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:09:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:57.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:09:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:58.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:09:58 compute-2 sudo[272650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:58 compute-2 sudo[272650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-2 sudo[272650]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:58 compute-2 ceph-mon[77081]: pgmap v3085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:09:58 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:58 compute-2 sudo[272675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:09:58 compute-2 sudo[272675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-2 sudo[272675]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-2 sudo[272700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:58 compute-2 sudo[272700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-2 sudo[272700]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-2 sudo[272725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:09:58 compute-2 sudo[272725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:58.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:58 compute-2 sudo[272725]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-2 sudo[272771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:58 compute-2 sudo[272771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-2 sudo[272771]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:58 compute-2 sudo[272797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:09:58 compute-2 sudo[272797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:58 compute-2 sudo[272797]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:59 compute-2 sudo[272822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:09:59 compute-2 sudo[272822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:59 compute-2 sudo[272822]: pam_unix(sudo:session): session closed for user root
Jan 22 15:09:59 compute-2 sudo[272847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:09:59 compute-2 sudo[272847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:09:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:59.524+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:09:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:59 compute-2 podman[272943]: 2026-01-22 15:09:59.667497963 +0000 UTC m=+0.065315700 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 15:09:59 compute-2 podman[272943]: 2026-01-22 15:09:59.782732167 +0000 UTC m=+0.180549884 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 15:09:59 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:09:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:09:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:09:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:09:59 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:09:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:09:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:09:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:59.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:00.477+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:00 compute-2 podman[273099]: 2026-01-22 15:10:00.586391691 +0000 UTC m=+0.078154016 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:10:00 compute-2 podman[273099]: 2026-01-22 15:10:00.598679282 +0000 UTC m=+0.090441557 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:10:00 compute-2 podman[273164]: 2026-01-22 15:10:00.844971775 +0000 UTC m=+0.062133946 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 22 15:10:00 compute-2 podman[273164]: 2026-01-22 15:10:00.853637392 +0000 UTC m=+0.070799533 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.component=keepalived-container, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, description=keepalived for Ceph, version=2.2.4)
Jan 22 15:10:00 compute-2 ceph-mon[77081]: pgmap v3086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:00 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 15:10:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 15:10:00 compute-2 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:00 compute-2 sudo[272847]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:01.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:01 compute-2 sudo[273198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:01 compute-2 sudo[273198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:01 compute-2 sudo[273198]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:01 compute-2 sudo[273223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:01 compute-2 sudo[273223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:01 compute-2 sudo[273223]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:01 compute-2 sudo[273248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:01 compute-2 sudo[273248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:01 compute-2 sudo[273248]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:01 compute-2 sudo[273273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:10:01 compute-2 sudo[273273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:01 compute-2 sudo[273273]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:01 compute-2 sudo[273298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:01 compute-2 sudo[273298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:01 compute-2 sudo[273298]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:01 compute-2 sudo[273323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:10:01 compute-2 sudo[273323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:02.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:02.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:02 compute-2 sudo[273323]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:03 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:03 compute-2 ceph-mon[77081]: pgmap v3087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:03 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:03 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:10:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:10:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:10:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:10:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:10:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:03.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:04 compute-2 ceph-mon[77081]: pgmap v3088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:04 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:04.419+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:05 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:10:05.008 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:10:05 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:10:05.009 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:10:05 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:05.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:06.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:06 compute-2 podman[273381]: 2026-01-22 15:10:06.041843959 +0000 UTC m=+0.086067722 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 15:10:06 compute-2 ceph-mon[77081]: pgmap v3089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:06 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:06 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:06.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:07 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:10:07.011 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:10:07 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:07.440+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:08 compute-2 ceph-mon[77081]: pgmap v3090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:08 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:08.451+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:09 compute-2 sudo[273403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:09 compute-2 sudo[273403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:09 compute-2 sudo[273403]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:09 compute-2 sudo[273428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:10:09 compute-2 sudo[273428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:09 compute-2 sudo[273428]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:09.433+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:09 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:10:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:10.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:10 compute-2 ceph-mon[77081]: pgmap v3091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:10 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:10 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:11.496+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:12 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:12.531+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:12 compute-2 ceph-mon[77081]: pgmap v3092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:12 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:13.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:14 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:14.473+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:15 compute-2 ceph-mon[77081]: pgmap v3093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:15 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:15 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:15.471+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:16.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:16 compute-2 ceph-mon[77081]: pgmap v3094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:16 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:16 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:17.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:18 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:18.415+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:19.401+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:19 compute-2 ceph-mon[77081]: pgmap v3095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:19 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:19 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2399157288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:10:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2399157288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:10:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:19 compute-2 sshd-session[273458]: Connection closed by authenticating user root 45.148.10.121 port 58180 [preauth]
Jan 22 15:10:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:20 compute-2 podman[273460]: 2026-01-22 15:10:20.081957888 +0000 UTC m=+0.132222580 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 15:10:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:20.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:21 compute-2 ceph-mon[77081]: pgmap v3096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:21 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:21.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:21 compute-2 sudo[273488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:21 compute-2 sudo[273488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:21 compute-2 sudo[273488]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:21 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:21 compute-2 sudo[273513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:21 compute-2 sudo[273513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:21 compute-2 sudo[273513]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:21.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 15:10:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 15:10:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:22.492+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:22 compute-2 ceph-mon[77081]: pgmap v3097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:22 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:22 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:23.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:23 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:23.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:24.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:25 compute-2 ceph-mon[77081]: pgmap v3098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:25 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:25.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:10:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:25.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:26.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:26 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:26 compute-2 ceph-mon[77081]: pgmap v3099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:26 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 15:10:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:26.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:27 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.381659) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627381697, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2241, "num_deletes": 487, "total_data_size": 4152597, "memory_usage": 4221152, "flush_reason": "Manual Compaction"}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627398795, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 2692767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92501, "largest_seqno": 94737, "table_properties": {"data_size": 2684212, "index_size": 4536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 26787, "raw_average_key_size": 22, "raw_value_size": 2663684, "raw_average_value_size": 2282, "num_data_blocks": 194, "num_entries": 1167, "num_filter_entries": 1167, "num_deletions": 487, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094490, "oldest_key_time": 1769094490, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 17168 microseconds, and 6053 cpu microseconds.
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398831) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 2692767 bytes OK
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398848) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.402160) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.402196) EVENT_LOG_v1 {"time_micros": 1769094627402171, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.402212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 4141391, prev total WAL file size 4141655, number of live WAL files 2.
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.404197) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(2629KB)], [192(10097KB)]
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627404249, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 13032376, "oldest_snapshot_seqno": -1}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 14021 keys, 11311235 bytes, temperature: kUnknown
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627472953, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 11311235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11236060, "index_size": 39030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386725, "raw_average_key_size": 27, "raw_value_size": 10998447, "raw_average_value_size": 784, "num_data_blocks": 1405, "num_entries": 14021, "num_filter_entries": 14021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.473240) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 11311235 bytes
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.475166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 164.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 15012, records dropped: 991 output_compression: NoCompression
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.475187) EVENT_LOG_v1 {"time_micros": 1769094627475177, "job": 124, "event": "compaction_finished", "compaction_time_micros": 68836, "compaction_time_cpu_micros": 26855, "output_level": 6, "num_output_files": 1, "total_output_size": 11311235, "num_input_records": 15012, "num_output_records": 14021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627475843, "job": 124, "event": "table_file_deletion", "file_number": 194}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627477798, "job": 124, "event": "table_file_deletion", "file_number": 192}
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.404144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:10:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:27.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:27.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:28 compute-2 ceph-mon[77081]: pgmap v3100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:28 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:28.577+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:29 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:29.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:29.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:30.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:30.550+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:30 compute-2 ceph-mon[77081]: pgmap v3101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:30 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:31.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:31 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:31 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:32.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:32.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:32 compute-2 ceph-mon[77081]: pgmap v3102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:32 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:33.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:10:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:10:34 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:34.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:34.586+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:35 compute-2 ceph-mon[77081]: pgmap v3103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:35 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:35.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:36 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:36 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:36.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:36 compute-2 podman[273546]: 2026-01-22 15:10:36.989928205 +0000 UTC m=+0.052060883 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:10:37 compute-2 ceph-mon[77081]: pgmap v3104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:37 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:37.659+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:38 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:38 compute-2 ceph-mon[77081]: pgmap v3105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:38 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:38.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:39 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:39.679+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:40.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:40 compute-2 ceph-mon[77081]: pgmap v3106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:40 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:40.676+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:41 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:41 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:41.703+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:41 compute-2 sudo[273569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:41 compute-2 sudo[273569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:41 compute-2 sudo[273569]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:42.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:42 compute-2 sudo[273594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:10:42 compute-2 sudo[273594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:10:42 compute-2 sudo[273594]: pam_unix(sudo:session): session closed for user root
Jan 22 15:10:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:42.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:42 compute-2 ceph-mon[77081]: pgmap v3107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:42 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:43.702+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:44.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:44.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:44 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:44.679+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:45.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:46.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:46.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:46 compute-2 ceph-mon[77081]: pgmap v3108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:46 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:46 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:10:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:10:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:10:47.248 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:10:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:10:47.248 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:10:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:47.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:48 compute-2 ceph-mon[77081]: pgmap v3109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:48 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:48 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5633 sec, osd.2 has slow ops (SLOW_OPS)
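The "blocked for 5633 sec" counter dates the stall. A back-of-the-envelope sketch, assuming the counter and the log timestamp share a clock:

    from datetime import datetime, timedelta

    reported = datetime(2026, 1, 22, 15, 10, 48)
    onset = reported - timedelta(seconds=5633)
    print(onset)  # 2026-01-22 13:36:55 -- the oldest op has been stuck since ~13:37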
Jan 22 15:10:48 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:48.633+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:49 compute-2 ceph-mon[77081]: pgmap v3110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:49 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:49.649+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:50 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:50 compute-2 ceph-mon[77081]: pgmap v3111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:50.607+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:51 compute-2 podman[273624]: 2026-01-22 15:10:51.003736365 +0000 UTC m=+0.069393766 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:10:51 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:51.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
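To see what osd.2 is actually stuck on, the OSD's admin socket can be queried with the standard dump_ops_in_flight command; a sketch, assuming access to the daemon's admin socket (e.g. from inside a cephadm shell on this host):

    import json
    import subprocess

    # Ask osd.2 for its in-flight ops and show the oldest few.
    out = subprocess.run(
        ['ceph', 'daemon', 'osd.2', 'dump_ops_in_flight'],
        capture_output=True, text=True, check=True,
    ).stdout
    ops = json.loads(out)
    print(ops['num_ops'], 'ops in flight')
    for op in ops['ops'][:5]:
        print(op['age'], op['description'])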
Jan 22 15:10:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:52.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:52 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:52 compute-2 ceph-mon[77081]: pgmap v3112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:52 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:52.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:53 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:53 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:53.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:54.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:54 compute-2 ceph-mon[77081]: pgmap v3113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
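The pgmap lines carry the PG state tally; a small sketch that parses one, with the format assumed from the lines above:

    import re

    line = ('pgmap v3113: 305 pgs: 2 active+clean+laggy, 303 active+clean; '
            '847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail')
    # Count-state pairs are terminated by ',' or ';' in the summary.
    states = {s: int(n) for n, s in re.findall(r'(\d+) ([a-z+]+)[,;]', line)}
    print(states)  # {'active+clean+laggy': 2, 'active+clean': 303}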
Jan 22 15:10:54 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:54.610+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:55 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:55.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:56.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:56 compute-2 ceph-mon[77081]: pgmap v3114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:56 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:56.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:10:57 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:57 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:10:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:57.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:10:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:58.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:10:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:10:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:10:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:10:58 compute-2 ceph-mon[77081]: pgmap v3115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:10:58 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:58.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:59 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:10:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:59.617+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:10:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:00.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:00.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:00 compute-2 ceph-mon[77081]: pgmap v3116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:00 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:01.620+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:02.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:02.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:02 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:02 compute-2 sudo[273655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:02 compute-2 sudo[273655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:02 compute-2 sudo[273655]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:02 compute-2 sudo[273680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:02 compute-2 sudo[273680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:02 compute-2 sudo[273680]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:02.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:03 compute-2 ceph-mon[77081]: pgmap v3117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:03.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:04.611+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:05 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:05 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:05 compute-2 ceph-mon[77081]: pgmap v3118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:05 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:05.618+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:05 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:05.832 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:11:05 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:05.834 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:11:05 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:05.834 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
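The "Matched UPDATE ... SbGlobalUpdateEvent" line is ovsdbapp's row-event machinery firing on the SB_Global nb_cfg bump. Agents register handlers roughly like this sketch (the handler body is hypothetical):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on any UPDATE of the SB_Global table.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event, row, old):
            # Called for each matched row; the agent then acks nb_cfg.
            print('nb_cfg bumped to', row.nb_cfg)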
Jan 22 15:11:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:06 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:06.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:06.601+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:07 compute-2 ceph-mon[77081]: pgmap v3119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:07 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:07 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:07.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:08 compute-2 podman[273708]: 2026-01-22 15:11:08.009221195 +0000 UTC m=+0.070555104 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
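The podman health_status=healthy events come from the container's scheduled healthcheck ('/openstack/healthcheck' per the config_data above); the same probe can be run on demand, as in this sketch:

    import subprocess

    # Exit code 0 means the container's healthcheck passed.
    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if r.returncode == 0 else 'unhealthy')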
Jan 22 15:11:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:08.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:08 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:08 compute-2 ceph-mon[77081]: pgmap v3120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:08 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:08.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:09 compute-2 sudo[273729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:09 compute-2 sudo[273729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-2 sudo[273729]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:09 compute-2 sudo[273754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:11:09 compute-2 sudo[273754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-2 sudo[273754]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:09 compute-2 sudo[273779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:09 compute-2 sudo[273779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-2 sudo[273779]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:09 compute-2 sudo[273804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:11:09 compute-2 sudo[273804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:09 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:09.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:09 compute-2 sudo[273804]: pam_unix(sudo:session): session closed for user root
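The sudo burst above is the mgr's cephadm orchestration loop: connectivity probes (/bin/true), locating python3, then running the copied, hash-named cephadm binary with gather-facts. Run by hand, gather-facts prints a JSON blob of host facts; a sketch, assuming a cephadm binary on PATH rather than the hash-named copy the mgr uses:

    import json
    import subprocess

    facts = json.loads(subprocess.run(
        ['cephadm', 'gather-facts'],
        capture_output=True, text=True, check=True).stdout)
    # Key names are from cephadm's facts output; .get() in case they differ.
    print(facts.get('hostname'), facts.get('memory_total_kb'))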
Jan 22 15:11:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:10.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:10 compute-2 ceph-mon[77081]: pgmap v3121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:10 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:11:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:11:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:11:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:11:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:11:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
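The mon "dispatch" entries record mgr-issued commands; equivalents are available from the CLI, sketched here (assumes a usable client.admin keyring on the host):

    import json
    import subprocess

    # Same calls the mgr dispatched above, issued directly.
    conf = subprocess.run(
        ['ceph', 'config', 'generate-minimal-conf'],
        capture_output=True, text=True, check=True).stdout
    tree = json.loads(subprocess.run(
        ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout)
    print(conf)
    print(len(tree.get('nodes', [])), 'entries in destroyed-filtered tree')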
Jan 22 15:11:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:10.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:11.757+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:11 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:12.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:12.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:12.751+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:12 compute-2 ceph-mon[77081]: pgmap v3122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:12 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:12 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:13.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:14 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:14.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:14.709+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:15 compute-2 ceph-mon[77081]: pgmap v3123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:15 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:15.751+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:16 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:16 compute-2 ceph-mon[77081]: pgmap v3124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:16 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:16.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:16 compute-2 sudo[273862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:16 compute-2 sudo[273862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:16 compute-2 sudo[273862]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:16 compute-2 sudo[273887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:11:16 compute-2 sudo[273887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:16 compute-2 sudo[273887]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:11:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:11:17 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:17 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:17.757+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:11:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:18.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:11:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:18 compute-2 ceph-mon[77081]: pgmap v3125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:18 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:11:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2143322617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
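The client.openstack df / get-quota pair is consistent with a periodic capacity poll from the OpenStack side against the volumes pool. The same queries from the CLI, sketched (assumes the client.openstack keyring is available):

    import json
    import subprocess

    def ceph_json(*args):
        # Helper for 'ceph --id openstack ... --format json'.
        out = subprocess.run(
            ['ceph', '--id', 'openstack', *args, '--format', 'json'],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = ceph_json('df')
    quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')
    print(df['stats']['total_bytes'], quota.get('quota_max_bytes'))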
Jan 22 15:11:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:18.802+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:19 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:19.810+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:20.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:20.792+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:20 compute-2 ceph-mon[77081]: pgmap v3126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:20 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:21.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:21 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:22 compute-2 podman[273915]: 2026-01-22 15:11:22.016648433 +0000 UTC m=+0.079936400 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:11:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:22 compute-2 sudo[273942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:22 compute-2 sudo[273942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:22 compute-2 sudo[273942]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:22 compute-2 sudo[273967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:22 compute-2 sudo[273967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:22 compute-2 sudo[273967]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:22.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:23 compute-2 ceph-mon[77081]: pgmap v3127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:23 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:23 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:23.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:24.824+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:25 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:25 compute-2 ceph-mon[77081]: pgmap v3128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:25 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:25.791+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:26.772+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:27 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:27 compute-2 ceph-mon[77081]: pgmap v3129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:27 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:27.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:28.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:28 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:28 compute-2 ceph-mon[77081]: pgmap v3130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:28 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:28.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:29.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:29 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:30.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:30 compute-2 ceph-mon[77081]: pgmap v3131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:30 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:31.660+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:32.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:32 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:32 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:33 compute-2 ceph-mon[77081]: pgmap v3132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:33 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:33 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:33.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:34.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:34 compute-2 ceph-mon[77081]: pgmap v3133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:34 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:34.560+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:35 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:35.531+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:36.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:36.499+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:36 compute-2 ceph-mon[77081]: pgmap v3134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:36 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:37.538+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:37 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:37 compute-2 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:38.582+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:38 compute-2 ceph-mon[77081]: pgmap v3135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:38 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:11:38 compute-2 podman[274001]: 2026-01-22 15:11:38.984979017 +0000 UTC m=+0.050031558 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:11:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:39.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:40.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:40 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:40.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:41.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:41 compute-2 ceph-mon[77081]: pgmap v3136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:41 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:41 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:42.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:42.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:42 compute-2 sudo[274021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:42 compute-2 sudo[274021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:42 compute-2 sudo[274021]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:42 compute-2 sudo[274046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:11:42 compute-2 sudo[274046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:11:42 compute-2 sudo[274046]: pam_unix(sudo:session): session closed for user root
Jan 22 15:11:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:42.595+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:43 compute-2 ceph-mon[77081]: pgmap v3137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:43 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:43.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:44 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:44.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:44.621+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:45 compute-2 ceph-mon[77081]: pgmap v3138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:45 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:45.601+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:46.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:46 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:46.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:46.555+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:47 compute-2 ceph-mon[77081]: pgmap v3139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:47 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:47 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:47.249 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:11:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:47.250 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:11:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:11:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:47.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:48.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:48.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:48 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:48 compute-2 ceph-mon[77081]: pgmap v3140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:48.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:49 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:49 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:49.541+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:50.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:50 compute-2 ceph-mon[77081]: pgmap v3141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:50 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:50.550+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:51 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:51.593+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:52.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:52.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:52 compute-2 ceph-mon[77081]: pgmap v3142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:52 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:52.627+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:53 compute-2 podman[274077]: 2026-01-22 15:11:53.022103708 +0000 UTC m=+0.084620552 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 15:11:53 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:11:53 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:53.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:54.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:54.686+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:54 compute-2 ceph-mon[77081]: pgmap v3143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:11:54 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:55.454 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:11:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:55.455 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:11:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:55.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:56.624+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:57 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/902276037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:11:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/902276037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:11:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:11:57.457 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:11:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:11:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:57.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:11:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:58.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:11:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:11:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:11:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:11:58 compute-2 ceph-mon[77081]: pgmap v3144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 15:11:58 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:58 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:58 compute-2 ceph-mon[77081]: pgmap v3145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 10 op/s
Jan 22 15:11:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:58.644+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:59 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:59 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:11:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:59.677+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:11:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:00.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:00.689+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:01 compute-2 ceph-mon[77081]: pgmap v3146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:01 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:01.710+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:02 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:02 compute-2 ceph-mon[77081]: pgmap v3147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:02 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:02 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:02 compute-2 sudo[274107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:02 compute-2 sudo[274107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:02 compute-2 sudo[274107]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:02 compute-2 sudo[274132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:02 compute-2 sudo[274132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:02 compute-2 sudo[274132]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:02.736+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:03.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:04.715+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:05 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:05.669+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:06.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:06.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:06.626+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:06 compute-2 ceph-mon[77081]: pgmap v3148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:06 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:06 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:07.594+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:08.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:08.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:08 compute-2 ceph-mon[77081]: pgmap v3149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 22 15:12:08 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:08 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:08.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:09 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:09 compute-2 ceph-mon[77081]: pgmap v3150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 15:12:09 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:12:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:09.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:10 compute-2 podman[274161]: 2026-01-22 15:12:10.042865914 +0000 UTC m=+0.088725529 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 15:12:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:10.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:10.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:10 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:11.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:11 compute-2 ceph-mon[77081]: pgmap v3151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 3 op/s
Jan 22 15:12:11 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:11 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:12.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:12.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:13 compute-2 ceph-mon[77081]: pgmap v3152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:13 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:13 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:13.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:14.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:14.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:14.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:14 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:14 compute-2 ceph-mon[77081]: pgmap v3153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:14 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:15.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:16.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:16.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:16 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:16 compute-2 ceph-mon[77081]: pgmap v3154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:16.617+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:16 compute-2 sudo[274184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:16 compute-2 sudo[274184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:17 compute-2 sudo[274184]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:17 compute-2 sudo[274209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:12:17 compute-2 sudo[274209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:17 compute-2 sudo[274209]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:17 compute-2 sudo[274234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:17 compute-2 sudo[274234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:17 compute-2 sudo[274234]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:17 compute-2 sudo[274259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:12:17 compute-2 sudo[274259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:17 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:17 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:17 compute-2 ceph-mon[77081]: Health check update: 97 slow ops, oldest one blocked for 5728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:17.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:17 compute-2 sudo[274259]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:18.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:18.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:18.638+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:19 compute-2 ceph-mon[77081]: pgmap v3155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:19 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:12:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/478835531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:12:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:19.632+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:20 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:20.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:21.675+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:22.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:22 compute-2 ceph-mon[77081]: pgmap v3156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:22 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:22 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:22 compute-2 sudo[274316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:22 compute-2 sudo[274316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:22 compute-2 sudo[274316]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:22 compute-2 sudo[274342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:22 compute-2 sudo[274342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:22 compute-2 sudo[274342]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:23.662+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:24 compute-2 podman[274367]: 2026-01-22 15:12:24.091611682 +0000 UTC m=+0.145775849 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 22 15:12:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:24.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:24.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:24 compute-2 ceph-mon[77081]: pgmap v3157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:24 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:24 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:24 compute-2 ceph-mon[77081]: Health check update: 97 slow ops, oldest one blocked for 5733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:25.633+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:25 compute-2 ceph-mon[77081]: pgmap v3158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:25 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:25 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:26.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:26.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:26.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:27 compute-2 ceph-mon[77081]: pgmap v3159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:27 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:27.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:28.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:28.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:28 compute-2 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:12:28 compute-2 ceph-mon[77081]: pgmap v3160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:28 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:28.646+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:29.643+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:30 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:30.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:30.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:30.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:31.620+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:32.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:32.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:32 compute-2 ceph-mon[77081]: pgmap v3161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:32 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:32 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:32.572+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:33 compute-2 ceph-mon[77081]: pgmap v3162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:33 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:33 compute-2 ceph-mon[77081]: Health check update: 97 slow ops, oldest one blocked for 5738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:33 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:12:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:12:33 compute-2 sudo[274398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:33 compute-2 sudo[274398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:33 compute-2 sudo[274398]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:33.555+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:33 compute-2 sudo[274423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:12:33 compute-2 sudo[274423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:33 compute-2 sudo[274423]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:12:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:12:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:34.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:34 compute-2 ceph-mon[77081]: pgmap v3163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:34 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:35.580+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:35 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:36.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:36.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:37 compute-2 ceph-mon[77081]: pgmap v3164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:37 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:37 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:37.592+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:38.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:38.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:38.575+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:38 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:38 compute-2 ceph-mon[77081]: pgmap v3165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:39.609+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:40.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:40 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:40 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:40.591+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:41 compute-2 podman[274452]: 2026-01-22 15:12:41.010403656 +0000 UTC m=+0.064493806 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:12:41 compute-2 ceph-mon[77081]: pgmap v3166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:41 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:41 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:41.572+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:42.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:42.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:43 compute-2 sudo[274473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:43 compute-2 sudo[274473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:43 compute-2 sudo[274473]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:43 compute-2 sudo[274498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:12:43 compute-2 sudo[274498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:12:43 compute-2 sudo[274498]: pam_unix(sudo:session): session closed for user root
Jan 22 15:12:43 compute-2 ceph-mon[77081]: pgmap v3167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:43 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:43.622+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:43 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:43 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:44.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:44.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:45 compute-2 ceph-mon[77081]: pgmap v3168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:45 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:45.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:46 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:46.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:46.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:46.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:47 compute-2 ceph-mon[77081]: pgmap v3169: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:47 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:12:47.249 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:12:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:12:47.250 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:12:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:12:47.250 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:12:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:47.603+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:48 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:48.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:48.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:48.630+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:49 compute-2 ceph-mon[77081]: pgmap v3170: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:49 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:49.603+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:50 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:50.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:50.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:50.587+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:51 compute-2 ceph-mon[77081]: pgmap v3171: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:51 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:51.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:52.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:52.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:52.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:53.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:53 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:53 compute-2 ceph-mon[77081]: pgmap v3172: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:53 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:54.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:54.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:54 compute-2 ceph-mon[77081]: pgmap v3173: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:54.557+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:55 compute-2 podman[274529]: 2026-01-22 15:12:55.033260068 +0000 UTC m=+0.097339783 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:12:55 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:55.543+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:56.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:56.523+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:56 compute-2 ceph-mon[77081]: pgmap v3174: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:56 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:12:57.169 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:12:57 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:12:57.170 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:12:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:12:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:57.476+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:12:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:58.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:12:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:12:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:12:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:12:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:58.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:12:58 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 15:12:58 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:12:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:12:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:59.508+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:12:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:12:59 compute-2 ceph-mon[77081]: pgmap v3175: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:12:59 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:12:59 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:00.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:00.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:00 compute-2 ceph-mon[77081]: pgmap v3176: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:00 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:01.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:01 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:02.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:02.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:02.496+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:02 compute-2 ceph-mon[77081]: pgmap v3177: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:02 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:02 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:03 compute-2 sudo[274560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:03 compute-2 sudo[274560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:03 compute-2 sudo[274560]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:03 compute-2 sudo[274585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:03 compute-2 sudo[274585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:03 compute-2 sudo[274585]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:03.520+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:03 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:04.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:04.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:05 compute-2 ceph-mon[77081]: pgmap v3178: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:05 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:05.450+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:06 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:06.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:06.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:07 compute-2 ceph-mon[77081]: pgmap v3179: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:07 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:07 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:07 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:13:07.172 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:13:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:07.469+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:08.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:08.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:08.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:09 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:09 compute-2 ceph-mon[77081]: pgmap v3180: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:09.546+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:09 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:09 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:10.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:10.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:10.589+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:10 compute-2 ceph-mon[77081]: pgmap v3181: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:10 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:11.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:12 compute-2 podman[274614]: 2026-01-22 15:13:12.010007612 +0000 UTC m=+0.065979585 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:13:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:12.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:12 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:12.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:13 compute-2 ceph-mon[77081]: pgmap v3182: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:13 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:13 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:13.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:14.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:14 compute-2 ceph-mon[77081]: pgmap v3183: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:14 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:14.713+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:15 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:15.732+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:16.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:16.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:16.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:17 compute-2 ceph-mon[77081]: pgmap v3184: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:17 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:17 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:17.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:18 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:18.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:18.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:13:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:13:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:13:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:13:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:18.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:19 compute-2 ceph-mon[77081]: pgmap v3185: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:19 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:13:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:13:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:19.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:20.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:20.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:20.812+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:20 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:21.810+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:22 compute-2 ceph-mon[77081]: pgmap v3186: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:22 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:22 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.099542) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802099617, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 2603, "num_deletes": 544, "total_data_size": 4781260, "memory_usage": 4862176, "flush_reason": "Manual Compaction"}
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Jan 22 15:13:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802278147, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 3125853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94743, "largest_seqno": 97340, "table_properties": {"data_size": 3116184, "index_size": 5202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30539, "raw_average_key_size": 22, "raw_value_size": 3092778, "raw_average_value_size": 2316, "num_data_blocks": 224, "num_entries": 1335, "num_filter_entries": 1335, "num_deletions": 544, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094627, "oldest_key_time": 1769094627, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 178665 microseconds, and 8228 cpu microseconds.
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.278215) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 3125853 bytes OK
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.278241) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.290057) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.290082) EVENT_LOG_v1 {"time_micros": 1769094802290075, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.290107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 4768375, prev total WAL file size 4768375, number of live WAL files 2.
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.292159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353330' seq:72057594037927935, type:22 .. '6C6F676D0034373834' seq:0, type:0; will stop at (end)
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(3052KB)], [195(10MB)]
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802292221, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14437088, "oldest_snapshot_seqno": -1}
Jan 22 15:13:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 14255 keys, 14224353 bytes, temperature: kUnknown
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802413145, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 14224353, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14144500, "index_size": 43132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35653, "raw_key_size": 391429, "raw_average_key_size": 27, "raw_value_size": 13899986, "raw_average_value_size": 975, "num_data_blocks": 1579, "num_entries": 14255, "num_filter_entries": 14255, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.413414) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 14224353 bytes
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.414755) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.3 rd, 117.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 15356, records dropped: 1101 output_compression: NoCompression
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.414771) EVENT_LOG_v1 {"time_micros": 1769094802414763, "job": 126, "event": "compaction_finished", "compaction_time_micros": 120999, "compaction_time_cpu_micros": 38784, "output_level": 6, "num_output_files": 1, "total_output_size": 14224353, "num_input_records": 15356, "num_output_records": 14255, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802415637, "job": 126, "event": "table_file_deletion", "file_number": 197}
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802417545, "job": 126, "event": "table_file_deletion", "file_number": 195}
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.291956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:22.778+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:23 compute-2 ceph-mon[77081]: pgmap v3187: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:23 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:23 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:23 compute-2 sudo[274639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:23 compute-2 sudo[274639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:23 compute-2 sudo[274639]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:23 compute-2 sudo[274664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:23 compute-2 sudo[274664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:23 compute-2 sudo[274664]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:23.785+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:24 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:24 compute-2 ceph-mon[77081]: pgmap v3188: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:24.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:25 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:25 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:25.807+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:26 compute-2 podman[274690]: 2026-01-22 15:13:26.187300704 +0000 UTC m=+0.069853486 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:13:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:26 compute-2 ceph-mon[77081]: pgmap v3189: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:26 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:26.804+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:27 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:27 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:27.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:28.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:28 compute-2 ceph-mon[77081]: pgmap v3190: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:28 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:28.743+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:29 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:29.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:30.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:30 compute-2 ceph-mon[77081]: pgmap v3191: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:30 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:30.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:31 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:31.677+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:32.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:32.676+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:32 compute-2 ceph-mon[77081]: pgmap v3192: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:32 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:32 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:33.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:33 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:33 compute-2 sudo[274720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:33 compute-2 sudo[274720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:33 compute-2 sudo[274720]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:33 compute-2 sudo[274745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:13:33 compute-2 sudo[274745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:33 compute-2 sudo[274745]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:33 compute-2 sudo[274770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:33 compute-2 sudo[274770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:33 compute-2 sudo[274770]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:33 compute-2 sudo[274795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:13:33 compute-2 sudo[274795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:34 compute-2 sudo[274795]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.672695) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814672784, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 436, "num_deletes": 274, "total_data_size": 343571, "memory_usage": 353000, "flush_reason": "Manual Compaction"}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814691761, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 224727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97345, "largest_seqno": 97776, "table_properties": {"data_size": 222373, "index_size": 389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6751, "raw_average_key_size": 19, "raw_value_size": 217350, "raw_average_value_size": 635, "num_data_blocks": 17, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094803, "oldest_key_time": 1769094803, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 19119 microseconds, and 2280 cpu microseconds.
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.691824) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 224727 bytes OK
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.691849) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698102) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698130) EVENT_LOG_v1 {"time_micros": 1769094814698122, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698154) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 340725, prev total WAL file size 340725, number of live WAL files 2.
Jan 22 15:13:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:34.696+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.699019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(219KB)], [198(13MB)]
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814699073, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14449080, "oldest_snapshot_seqno": -1}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 14039 keys, 12781472 bytes, temperature: kUnknown
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814855547, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 12781472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12703981, "index_size": 41282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35141, "raw_key_size": 387516, "raw_average_key_size": 27, "raw_value_size": 12463868, "raw_average_value_size": 887, "num_data_blocks": 1497, "num_entries": 14039, "num_filter_entries": 14039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:13:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.855770) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12781472 bytes
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.857271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.3 rd, 81.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(121.2) write-amplify(56.9) OK, records in: 14597, records dropped: 558 output_compression: NoCompression
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.857287) EVENT_LOG_v1 {"time_micros": 1769094814857280, "job": 128, "event": "compaction_finished", "compaction_time_micros": 156534, "compaction_time_cpu_micros": 60105, "output_level": 6, "num_output_files": 1, "total_output_size": 12781472, "num_input_records": 14597, "num_output_records": 14039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814857437, "job": 128, "event": "table_file_deletion", "file_number": 200}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814859811, "job": 128, "event": "table_file_deletion", "file_number": 198}
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:13:34 compute-2 ceph-mon[77081]: pgmap v3193: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:34 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:13:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:13:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:13:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:13:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:13:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:13:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:13:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:35.686+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:35 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:36.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:36 compute-2 ceph-mon[77081]: pgmap v3194: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:36 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:37.695+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:38 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:38 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:38.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:39 compute-2 ceph-mon[77081]: pgmap v3195: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:39 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:39.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:40.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:40 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:40 compute-2 ceph-mon[77081]: pgmap v3196: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:40.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:41 compute-2 sudo[274855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:41 compute-2 sudo[274855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:41 compute-2 sudo[274855]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:41 compute-2 sudo[274880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:13:41 compute-2 sudo[274880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:41 compute-2 sudo[274880]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:41 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:41 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:13:41 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:13:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:41.681+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:42.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:42 compute-2 ceph-mon[77081]: pgmap v3197: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:42 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:42.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:42.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:43 compute-2 podman[274906]: 2026-01-22 15:13:43.000353585 +0000 UTC m=+0.054188657 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 15:13:43 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:43 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:43 compute-2 sudo[274924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:43 compute-2 sudo[274924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:43 compute-2 sudo[274924]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:43 compute-2 sudo[274949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:13:43 compute-2 sudo[274949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:13:43 compute-2 sudo[274949]: pam_unix(sudo:session): session closed for user root
Jan 22 15:13:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:43.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:44.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:44 compute-2 ceph-mon[77081]: pgmap v3198: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:44 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:13:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:44.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:13:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:44.694+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:45 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:45.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:46.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:46 compute-2 ceph-mon[77081]: pgmap v3199: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:46 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:46.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:13:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:13:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:13:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:13:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:13:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:13:47 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:47.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:48.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:48 compute-2 ceph-mon[77081]: pgmap v3200: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:48 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:48.732+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:49 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:49.739+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:50 compute-2 ceph-mon[77081]: pgmap v3201: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:50 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:50.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:51 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:51.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:52.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:13:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:13:52 compute-2 ceph-mon[77081]: pgmap v3202: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:52 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:13:52 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:52.687+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:53 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:53.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:54.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:54.695+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:13:54 compute-2 ceph-mon[77081]: pgmap v3203: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:13:54 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:55.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:56.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:56.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:56.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:57 compute-2 podman[274981]: 2026-01-22 15:13:57.069229787 +0000 UTC m=+0.123032505 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:13:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:57.688+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:58.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:13:58.306 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:13:58 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:13:58.307 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:13:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:13:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:13:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:13:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:58.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:58 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:59 compute-2 ceph-mon[77081]: pgmap v3204: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:13:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:13:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:59.687+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:13:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:00.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:00.674+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-2 ceph-mon[77081]: pgmap v3205: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:00 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:00 compute-2 ceph-mon[77081]: pgmap v3206: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:00 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:14:01 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:14:01 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:01.636+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:01 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:02.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:02.649+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:02 compute-2 ceph-mon[77081]: pgmap v3207: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:02 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:02 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:02 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:03 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:14:03.309 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:14:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:03.696+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:03 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:03 compute-2 sudo[275012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:03 compute-2 sudo[275012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:03 compute-2 sudo[275012]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:03 compute-2 sudo[275037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:03 compute-2 sudo[275037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:03 compute-2 sudo[275037]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:04.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:04.659+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:05 compute-2 ceph-mon[77081]: pgmap v3208: 305 pgs: 2 active+clean+laggy, 303 active+clean; 848 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 22 15:14:05 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:05.617+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:06 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:06.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:06.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:07 compute-2 ceph-mon[77081]: pgmap v3209: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 22 KiB/s wr, 19 op/s
Jan 22 15:14:07 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:07 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:07.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:08.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:08.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:08 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:08.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:09 compute-2 ceph-mon[77081]: pgmap v3210: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 22 15:14:09 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:09.549+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:10.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:11.483+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:11 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:11 compute-2 ceph-mon[77081]: pgmap v3211: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 694 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:14:11 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:12.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:12.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:12.444+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:12 compute-2 ceph-mon[77081]: pgmap v3212: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 21 op/s
Jan 22 15:14:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:13.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:13 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:13 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:13 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:13 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:13 compute-2 podman[275067]: 2026-01-22 15:14:13.991554463 +0000 UTC m=+0.053705474 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 15:14:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:14.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:14.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:14.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:14 compute-2 ceph-mon[77081]: pgmap v3213: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 21 op/s
Jan 22 15:14:14 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:15.408+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:16.403+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:17 compute-2 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:17.445+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:17 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:17 compute-2 ceph-mon[77081]: pgmap v3214: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 KiB/s wr, 36 op/s
Jan 22 15:14:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:18.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:18.460+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:14:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:14:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:19 compute-2 ceph-mon[77081]: pgmap v3215: 305 pgs: 2 active+clean+laggy, 303 active+clean; 847 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 21 op/s
Jan 22 15:14:19 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:19.464+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:20.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:20.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:20 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:21 compute-2 ceph-mon[77081]: pgmap v3216: 305 pgs: 2 active+clean+laggy, 303 active+clean; 864 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 997 KiB/s wr, 35 op/s
Jan 22 15:14:21 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:21.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:22.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:22.410+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:22 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:22 compute-2 ceph-mon[77081]: pgmap v3217: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 36 op/s
Jan 22 15:14:22 compute-2 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:23.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:23 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:23 compute-2 sudo[275091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:23 compute-2 sudo[275091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:23 compute-2 sudo[275091]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:24 compute-2 sudo[275116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:24 compute-2 sudo[275116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:24 compute-2 sudo[275116]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:24.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:24.476+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:24 compute-2 ceph-mon[77081]: pgmap v3218: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 22 15:14:24 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:25.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:14:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:26.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:14:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:26.421+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:26 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:26 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:14:26 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1130213286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:14:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:27.376+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:27 compute-2 ceph-mon[77081]: pgmap v3219: 305 pgs: 2 active+clean+laggy, 303 active+clean; 856 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.7 MiB/s wr, 41 op/s
Jan 22 15:14:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:27 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:27 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:28 compute-2 podman[275143]: 2026-01-22 15:14:28.047881605 +0000 UTC m=+0.099729156 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 15:14:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:28.396+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:28.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:29.423+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:30.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:30.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:31 compute-2 ceph-mon[77081]: pgmap v3220: 305 pgs: 2 active+clean+laggy, 303 active+clean; 856 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Jan 22 15:14:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:31.386+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:31 compute-2 ceph-mon[77081]: pgmap v3221: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Jan 22 15:14:31 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:32.351+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:32.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:33 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:33 compute-2 ceph-mon[77081]: pgmap v3222: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 795 KiB/s wr, 17 op/s
Jan 22 15:14:33 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:33.369+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:34.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:34.379+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:35.351+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:35 compute-2 ceph-mon[77081]: pgmap v3223: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:14:35 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:36.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:36.358+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:36 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:36 compute-2 ceph-mon[77081]: pgmap v3224: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:14:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:37.318+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:37 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:38.359+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:38 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:38 compute-2 ceph-mon[77081]: pgmap v3225: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 596 B/s wr, 4 op/s
Jan 22 15:14:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:39.309+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:40.276+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:40 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:41 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:41 compute-2 ceph-mon[77081]: pgmap v3226: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 4 op/s
Jan 22 15:14:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:41.272+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:41 compute-2 sudo[275176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:41 compute-2 sudo[275176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-2 sudo[275176]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:41 compute-2 sudo[275201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:14:41 compute-2 sudo[275201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-2 sudo[275201]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:41 compute-2 sudo[275226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:41 compute-2 sudo[275226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-2 sudo[275226]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:41 compute-2 sudo[275251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:14:41 compute-2 sudo[275251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:41 compute-2 sudo[275251]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:42.253+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:42 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:42 compute-2 ceph-mon[77081]: pgmap v3227: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:42 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:43.261+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:44 compute-2 sudo[275309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:44 compute-2 sudo[275309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:44 compute-2 sudo[275309]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:44.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:44 compute-2 podman[275333]: 2026-01-22 15:14:44.260806197 +0000 UTC m=+0.056900317 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 15:14:44 compute-2 sudo[275340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:44.271+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:14:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:44 compute-2 sudo[275340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:44 compute-2 sudo[275340]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:44 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:14:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:14:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:14:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:14:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:14:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:14:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:45.322+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:45 compute-2 ceph-mon[77081]: pgmap v3228: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:45 compute-2 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:14:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:46.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:46.342+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:46 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:46 compute-2 ceph-mon[77081]: pgmap v3229: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209794) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887209867, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 369, "total_data_size": 1953270, "memory_usage": 1977888, "flush_reason": "Manual Compaction"}
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887219013, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 847476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97781, "largest_seqno": 99008, "table_properties": {"data_size": 843078, "index_size": 1601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15841, "raw_average_key_size": 23, "raw_value_size": 832092, "raw_average_value_size": 1209, "num_data_blocks": 69, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 369, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094814, "oldest_key_time": 1769094814, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 9263 microseconds, and 3918 cpu microseconds.
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.219064) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 847476 bytes OK
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.219087) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.220935) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.220964) EVENT_LOG_v1 {"time_micros": 1769094887220957, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.220985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1946701, prev total WAL file size 1946701, number of live WAL files 2.
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.221696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303038' seq:0, type:0; will stop at (end)
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(827KB)], [201(12MB)]
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887221742, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 13628948, "oldest_snapshot_seqno": -1}
Jan 22 15:14:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:14:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:14:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:14:47.252 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:14:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:14:47.252 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:14:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:47.324+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 14004 keys, 10131995 bytes, temperature: kUnknown
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887451555, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 10131995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10058508, "index_size": 37342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386587, "raw_average_key_size": 27, "raw_value_size": 9823000, "raw_average_value_size": 701, "num_data_blocks": 1335, "num_entries": 14004, "num_filter_entries": 14004, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.451855) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 10131995 bytes
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.455552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.3 rd, 44.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(28.0) write-amplify(12.0) OK, records in: 14727, records dropped: 723 output_compression: NoCompression
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.455582) EVENT_LOG_v1 {"time_micros": 1769094887455570, "job": 130, "event": "compaction_finished", "compaction_time_micros": 229890, "compaction_time_cpu_micros": 38006, "output_level": 6, "num_output_files": 1, "total_output_size": 10131995, "num_input_records": 14727, "num_output_records": 14004, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887456248, "job": 130, "event": "table_file_deletion", "file_number": 203}
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887459351, "job": 130, "event": "table_file_deletion", "file_number": 201}
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.221641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:14:47 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:47 compute-2 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:48.302+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:48.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:48 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:48 compute-2 ceph-mon[77081]: pgmap v3230: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:49.304+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:49 compute-2 sudo[275383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:14:49 compute-2 sudo[275383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:49 compute-2 sudo[275383]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:49 compute-2 sudo[275408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:14:49 compute-2 sudo[275408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:14:49 compute-2 sudo[275408]: pam_unix(sudo:session): session closed for user root
Jan 22 15:14:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:50.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:50.265+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:50 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:14:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:14:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:50.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:51 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:51.311+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:52 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:52 compute-2 ceph-mon[77081]: pgmap v3231: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:52 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:52.314+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:52.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:53.350+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:53 compute-2 ceph-mon[77081]: pgmap v3232: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:54.306+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:14:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:54.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:14:54 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:54 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:54 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:54 compute-2 ceph-mon[77081]: pgmap v3233: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:55.345+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:56 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:56.378+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:14:57 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:57 compute-2 ceph-mon[77081]: pgmap v3234: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:57 compute-2 ceph-mon[77081]: Health check update: 122 slow ops, oldest one blocked for 5887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:14:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:57.423+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:58.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:58.387+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:14:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:14:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:58.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:14:58 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:14:58 compute-2 ceph-mon[77081]: pgmap v3235: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:14:59 compute-2 podman[275438]: 2026-01-22 15:14:59.08265192 +0000 UTC m=+0.134034212 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 15:14:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:59.385+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:14:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:00 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:00 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:00.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:00.397+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:01.370+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:01 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:01 compute-2 ceph-mon[77081]: pgmap v3236: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:02 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:02.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:02.369+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:02.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:03 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:03 compute-2 ceph-mon[77081]: pgmap v3237: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:03 compute-2 ceph-mon[77081]: Health check update: 122 slow ops, oldest one blocked for 5892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:03.404+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:04.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:04 compute-2 sudo[275466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:04 compute-2 sudo[275466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:04 compute-2 sudo[275466]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:04.396+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:04 compute-2 sudo[275491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:04 compute-2 sudo[275491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:04 compute-2 sudo[275491]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:04.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:04 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:04 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:04 compute-2 ceph-mon[77081]: pgmap v3238: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:05.412+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:05 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:06.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:06 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:15:06.537 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:15:06 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:15:06.539 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:15:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:06 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:06 compute-2 ceph-mon[77081]: pgmap v3239: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:07.405+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:07 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:08.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:08 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:15:08.541 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:15:08 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:08 compute-2 ceph-mon[77081]: pgmap v3240: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:09.410+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:09 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:10.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:10.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:10 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:10 compute-2 ceph-mon[77081]: pgmap v3241: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:11.394+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:11 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:12.411+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:12.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:12 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:12 compute-2 ceph-mon[77081]: pgmap v3242: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:12 compute-2 ceph-mon[77081]: Health check update: 122 slow ops, oldest one blocked for 5902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:13.436+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:13 compute-2 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 15:15:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:14.421+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:14.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:15 compute-2 podman[275522]: 2026-01-22 15:15:15.031745041 +0000 UTC m=+0.085030202 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 15:15:15 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:15 compute-2 ceph-mon[77081]: pgmap v3243: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:15.406+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:16 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:16 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:16 compute-2 ceph-mon[77081]: pgmap v3244: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:16.454+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:16.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:17 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:17 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:17.473+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:18 compute-2 ceph-mon[77081]: pgmap v3245: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:18 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:18.428+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:19 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:15:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3017842320' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:15:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:19.423+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:20.417+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:21.416+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:22 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:22 compute-2 ceph-mon[77081]: pgmap v3246: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:22.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:22.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:23.398+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:24.373+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:24 compute-2 ceph-mon[77081]: pgmap v3247: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:24 compute-2 sudo[275545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:24 compute-2 sudo[275545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:24 compute-2 sudo[275545]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:24 compute-2 sudo[275570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:24 compute-2 sudo[275570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:24 compute-2 sudo[275570]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:25.377+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-2 ceph-mon[77081]: pgmap v3248: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:26 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 15:15:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:26.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 15:15:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:26.343+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:27.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-2 ceph-mon[77081]: pgmap v3249: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:28.405+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:28 compute-2 ceph-mon[77081]: pgmap v3250: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:15:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 99K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1768 writes, 10K keys, 1768 commit groups, 1.0 writes per commit group, ingest: 16.40 MB, 0.03 MB/s
                                           Interval WAL: 1768 writes, 1768 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.6      1.39              0.39        65    0.021       0      0       0.0       0.0
                                             L6      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.8    129.6    112.0      5.51              2.12        64    0.086    652K    35K       0.0       0.0
                                            Sum      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.8    103.4    104.8      6.90              2.51       129    0.054    652K    35K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6     77.3     75.7      1.12              0.34        14    0.080    103K   5155       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    129.6    112.0      5.51              2.12        64    0.086    652K    35K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.8      1.39              0.39        64    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.104, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.71 GB write, 0.12 MB/s write, 0.70 GB read, 0.12 MB/s read, 6.9 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 75.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.00043 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3950,71.45 MB,23.5028%) FilterBlock(129,1.77 MB,0.581977%) IndexBlock(129,2.23 MB,0.735037%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:15:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:29.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:30 compute-2 podman[275598]: 2026-01-22 15:15:30.107653225 +0000 UTC m=+0.152028493 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:15:30 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:30.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:30.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:31 compute-2 ceph-mon[77081]: pgmap v3251: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:31.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:32.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:32 compute-2 ceph-mon[77081]: pgmap v3252: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:32 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:33.455+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:34 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:34.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:34.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:15:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:34.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:15:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:35.413+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:35 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:35 compute-2 ceph-mon[77081]: pgmap v3253: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:36.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:36.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:37.478+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:38 compute-2 ceph-mon[77081]: pgmap v3254: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:38.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:39.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:39 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:39 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:39 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:39 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:39 compute-2 ceph-mon[77081]: pgmap v3255: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:39 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:40.457+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:40 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:40 compute-2 ceph-mon[77081]: pgmap v3256: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:40 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:41.505+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:41 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:42.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:42.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:15:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:43 compute-2 ceph-mon[77081]: pgmap v3257: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:43 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:43 compute-2 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:43.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:44.533+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:44 compute-2 sudo[275630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:44 compute-2 sudo[275630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:44 compute-2 sudo[275630]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:44 compute-2 sudo[275655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:44 compute-2 sudo[275655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:44 compute-2 sudo[275655]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:44 compute-2 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 15:15:44 compute-2 ceph-mon[77081]: pgmap v3258: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:44 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:45.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:46 compute-2 podman[275681]: 2026-01-22 15:15:46.043277415 +0000 UTC m=+0.087702989 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 15:15:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:46 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:46.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:15:47.252 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:15:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:15:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:15:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:15:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:15:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:47.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:48 compute-2 ceph-mon[77081]: pgmap v3259: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:48 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:48 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:48 compute-2 ceph-mon[77081]: Health check update: 123 slow ops, oldest one blocked for 5938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:48.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:48 compute-2 ceph-mon[77081]: pgmap v3260: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:48 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:49.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:49 compute-2 sudo[275703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:49 compute-2 sudo[275703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:49 compute-2 sudo[275703]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:49 compute-2 sudo[275728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:15:49 compute-2 sudo[275728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:49 compute-2 sudo[275728]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:49 compute-2 sudo[275753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:15:49 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:49 compute-2 sudo[275753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:49 compute-2 sudo[275753]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:49 compute-2 sudo[275778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:15:49 compute-2 sudo[275778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:15:50 compute-2 sudo[275778]: pam_unix(sudo:session): session closed for user root
Jan 22 15:15:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:50.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:50.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:50.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:50 compute-2 ceph-mon[77081]: pgmap v3261: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:50 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:51.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:51 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:15:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:52.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:15:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:52.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:15:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:52.588+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:53 compute-2 ceph-mon[77081]: pgmap v3262: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:53 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:15:53 compute-2 ceph-mon[77081]: Health check update: 123 slow ops, oldest one blocked for 5943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:15:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:15:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:15:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:15:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:15:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:15:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:53.547+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:54 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:54.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:54.523+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:54.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:55 compute-2 ceph-mon[77081]: pgmap v3263: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:55 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:55.517+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:56.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:56.476+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:56.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:56 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:57.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:57 compute-2 ceph-mon[77081]: pgmap v3264: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:15:57 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:57 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:57 compute-2 ceph-mon[77081]: Health check update: 123 slow ops, oldest one blocked for 5948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:15:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:58.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:15:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:15:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:15:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:15:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:59.468+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:15:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:15:59 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:15:59 compute-2 ceph-mon[77081]: pgmap v3265: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:00 compute-2 sudo[275840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:00 compute-2 sudo[275840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:00 compute-2 sudo[275840]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:00 compute-2 sudo[275871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:16:00 compute-2 sudo[275871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:00 compute-2 sudo[275871]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:00.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:00 compute-2 podman[275864]: 2026-01-22 15:16:00.522247514 +0000 UTC m=+0.115709322 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 15:16:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:00.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:01 compute-2 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 15:16:01 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:01 compute-2 ceph-mon[77081]: pgmap v3266: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:16:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:16:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:01.435+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:02 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:02.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:02.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:03 compute-2 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 15:16:03 compute-2 ceph-mon[77081]: pgmap v3267: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:03.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:04.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:04 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:04 compute-2 ceph-mon[77081]: pgmap v3268: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:04.440+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:04 compute-2 sudo[275918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:04 compute-2 sudo[275918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:04 compute-2 sudo[275918]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:04 compute-2 sudo[275944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:04 compute-2 sudo[275944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:04 compute-2 sudo[275944]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:05.457+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:05 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:06.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:06 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:06 compute-2 ceph-mon[77081]: pgmap v3269: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:06.505+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:06.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:07.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:07 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:07 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:08.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:08.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:08.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:09 compute-2 ceph-mon[77081]: pgmap v3270: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:09 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:09.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:10 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:10.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:10.566+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:11 compute-2 ceph-mon[77081]: pgmap v3271: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:11 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:11.588+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:12.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:12 compute-2 ceph-mon[77081]: pgmap v3272: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:12 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:12.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:13 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:13 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:13.531+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:14 compute-2 ceph-mon[77081]: pgmap v3273: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:14 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:14.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:15 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:15.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:16.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:16.557+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:16 compute-2 podman[275975]: 2026-01-22 15:16:16.996122058 +0000 UTC m=+0.058274397 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:16:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:17.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:17 compute-2 ceph-mon[77081]: pgmap v3274: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:17 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:18.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:16:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:16:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:16:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:16:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:18.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:18.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:18 compute-2 ceph-mon[77081]: pgmap v3275: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:18 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:16:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:16:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:19.632+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:20.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:20 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:20.607+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:21 compute-2 ceph-mon[77081]: pgmap v3276: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:21 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:21.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:22.543+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:23 compute-2 ceph-mon[77081]: pgmap v3277: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:23 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:23 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:23.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:24 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:24 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:16:24.410 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:16:24 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:16:24.411 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:16:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:24.496+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:24.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:25 compute-2 sudo[275998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:25 compute-2 sudo[275998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:25 compute-2 sudo[275998]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:25 compute-2 sudo[276023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:25 compute-2 sudo[276023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:25 compute-2 sudo[276023]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:25 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:16:25.413 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:16:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:25.508+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:25 compute-2 ceph-mon[77081]: pgmap v3278: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:25 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:26.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:26.487+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:26 compute-2 ceph-mon[77081]: pgmap v3279: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:26 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:27.460+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:27 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:27 compute-2 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5978 sec, osd.2 has slow ops (SLOW_OPS)
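An op blocked for 5978 seconds points at osd.2 itself rather than transient load. One way to triage is to query the OSD's admin socket for in-flight ops; ceph daemon osd.2 dump_ops_in_flight returns JSON with an age and description per op. A sketch, assuming it runs where osd.2's admin socket is reachable (inside the OSD container or a cephadm shell on this host, for example):

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'daemon', 'osd.2', 'dump_ops_in_flight'],
        capture_output=True, text=True, check=True).stdout
    ops = json.loads(raw).get('ops', [])
    # Print the five oldest ops: age in seconds, then the op description,
    # which should match the osd_op(...) text logged above.
    for op in sorted(ops, key=lambda o: o.get('age', 0), reverse=True)[:5]:
        print(round(op.get('age', 0), 1), op.get('description', '')[:80])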
Jan 22 15:16:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:28.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:28.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:28.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:29 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:29 compute-2 ceph-mon[77081]: pgmap v3280: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:29.416+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:30.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:30.425+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:31 compute-2 podman[276051]: 2026-01-22 15:16:31.059831106 +0000 UTC m=+0.110733601 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
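The podman line above is a periodic container health check: health_status=healthy for ovn_controller, with the container's full config_data echoed into the event. Rather than scraping the journal, recent podman (4.3+) can stream these as events; a hedged sketch, since the --since duration syntax and the JSON field names (Name, HealthStatus) depend on the podman version:

    import json
    import subprocess

    proc = subprocess.run(
        ['podman', 'events', '--since', '5m', '--stream=false',
         '--filter', 'event=health_status', '--format', 'json'],
        capture_output=True, text=True, check=True)
    for line in proc.stdout.splitlines():
        ev = json.loads(line)   # one JSON object per event
        print(ev.get('Name'), ev.get('HealthStatus'))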
Jan 22 15:16:31 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:31 compute-2 ceph-mon[77081]: pgmap v3281: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:16:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.5 total, 600.0 interval
                                           Cumulative writes: 12K writes, 41K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 4221 syncs, 3.05 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 715 writes, 1413 keys, 715 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s
                                           Interval WAL: 715 writes, 337 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.044       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da54b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
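The block from "DUMPING STATS" down to here is osd.2's periodic RocksDB statistics dump: one DB Stats section, then compaction, block-cache, and read-latency sections per column family (default, m-0..2, p-0..2, O-0..2, L, P). Writes, compactions, and stalls are all near zero, which suggests the slow ops are not originating inside RocksDB itself. A minimal sketch to pull just the WAL and stall lines out of a captured dump; only the line prefixes shown above are assumed, and the capture file name is hypothetical (e.g. saved via journalctl -o cat):

    # osd2-rocksdb-dump.txt is a hypothetical file holding the dump text.
    stats_text = open('osd2-rocksdb-dump.txt').read()
    for line in stats_text.splitlines():
        line = line.strip()
        if line.startswith(('Cumulative WAL:', 'Interval WAL:',
                            'Cumulative stall:', 'Interval stall:')):
            print(line)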
Jan 22 15:16:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:31.407+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:16:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:32 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:32.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:32.397+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:33.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:33 compute-2 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 15:16:33 compute-2 ceph-mon[77081]: pgmap v3282: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:33.365+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:34.331+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:34.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:34 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:35.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:35.292+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:35 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:35 compute-2 ceph-mon[77081]: pgmap v3283: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:35 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:36.270+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:36 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:36 compute-2 ceph-mon[77081]: pgmap v3284: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:36.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:37.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:37.241+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:37 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:37 compute-2 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 5987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:37 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:38.221+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:16:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:39.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:39.190+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:39 compute-2 ceph-mon[77081]: pgmap v3285: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:39 compute-2 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:16:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:40.212+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:40.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:40 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:40 compute-2 ceph-mon[77081]: pgmap v3286: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:40 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:41.220+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:42 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:42.221+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:43.178+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:43 compute-2 ceph-mon[77081]: pgmap v3287: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:43 compute-2 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 5992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:43 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:44.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:45.160+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:45 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:45 compute-2 ceph-mon[77081]: pgmap v3288: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:45 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:45 compute-2 sudo[276085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:45 compute-2 sudo[276085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:45 compute-2 sudo[276085]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:45 compute-2 sudo[276110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:16:45 compute-2 sudo[276110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:16:45 compute-2 sudo[276110]: pam_unix(sudo:session): session closed for user root
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.099518) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095006099588, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1831, "num_deletes": 446, "total_data_size": 3293096, "memory_usage": 3349456, "flush_reason": "Manual Compaction"}
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Jan 22 15:16:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:46.178+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:46.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095006768728, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 2139298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 99013, "largest_seqno": 100839, "table_properties": {"data_size": 2132203, "index_size": 3524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 22351, "raw_average_key_size": 22, "raw_value_size": 2115184, "raw_average_value_size": 2149, "num_data_blocks": 153, "num_entries": 984, "num_filter_entries": 984, "num_deletions": 446, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094887, "oldest_key_time": 1769094887, "file_creation_time": 1769095006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 669407 microseconds, and 11616 cpu microseconds.
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.768926) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 2139298 bytes OK
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.769004) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.996427) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.996483) EVENT_LOG_v1 {"time_micros": 1769095006996470, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.996516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3283757, prev total WAL file size 3332115, number of live WAL files 2.
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.998457) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(2089KB)], [204(9894KB)]
Jan 22 15:16:46 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095006998504, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 12271293, "oldest_snapshot_seqno": -1}
Jan 22 15:16:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:47.136+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:16:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:16:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:16:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:16:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:16:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 14083 keys, 10401068 bytes, temperature: kUnknown
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095007809972, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 10401068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10326606, "index_size": 38125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35269, "raw_key_size": 388166, "raw_average_key_size": 27, "raw_value_size": 10089301, "raw_average_value_size": 716, "num_data_blocks": 1367, "num_entries": 14083, "num_filter_entries": 14083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:16:47 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:47 compute-2 ceph-mon[77081]: pgmap v3289: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.810231) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 10401068 bytes
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.979638) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 15.1 rd, 12.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 14988, records dropped: 905 output_compression: NoCompression
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.979694) EVENT_LOG_v1 {"time_micros": 1769095007979674, "job": 132, "event": "compaction_finished", "compaction_time_micros": 811546, "compaction_time_cpu_micros": 25578, "output_level": 6, "num_output_files": 1, "total_output_size": 10401068, "num_input_records": 14988, "num_output_records": 14083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095007980302, "job": 132, "event": "table_file_deletion", "file_number": 206}
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095007982820, "job": 132, "event": "table_file_deletion", "file_number": 204}
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.998382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:16:48 compute-2 podman[276136]: 2026-01-22 15:16:48.008071214 +0000 UTC m=+0.055511135 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:16:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:48.177+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:48.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:49.184+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:49 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:49 compute-2 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 5997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:49 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:49 compute-2 ceph-mon[77081]: pgmap v3290: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:49 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:50 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:50.202+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:50.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:51.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:51 compute-2 ceph-mon[77081]: pgmap v3291: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:51 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:51.219+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:52 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:52.204+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:52.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:16:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:16:53 compute-2 ceph-mon[77081]: pgmap v3292: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:53 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:53.170+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:54.147+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:54.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:54 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:55.141+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:56.125+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:16:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:56.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:57 compute-2 ceph-mon[77081]: pgmap v3293: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:57 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:57 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:57.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:57.122+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:16:57 compute-2 ceph-mon[77081]: pgmap v3294: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:57 compute-2 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 6007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:16:57 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:16:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:58.159+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:16:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:58.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:16:58 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 73 ])
Jan 22 15:16:58 compute-2 ceph-mon[77081]: pgmap v3295: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:16:58 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:16:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:16:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:59.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:16:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:59.128+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:16:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:16:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:16:59 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:17:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:00.147+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:17:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:00.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:00 compute-2 sudo[276162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:00 compute-2 sudo[276162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-2 sudo[276162]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:00 compute-2 sudo[276187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:17:00 compute-2 sudo[276187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-2 sudo[276187]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:00 compute-2 sudo[276212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:00 compute-2 sudo[276212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-2 sudo[276212]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:00 compute-2 sudo[276237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:17:00 compute-2 sudo[276237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:00 compute-2 ceph-mon[77081]: pgmap v3296: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 15:17:00 compute-2 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 15:17:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:01.153+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 68 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 68 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 68 slow requests (by type [ 'delayed' : 68 ] most affected pool [ 'vms' : 47 ])
Jan 22 15:17:01 compute-2 sudo[276237]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:02 compute-2 podman[276294]: 2026-01-22 15:17:02.01858838 +0000 UTC m=+0.078313463 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 15:17:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:02.144+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 32 ])
Jan 22 15:17:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:02.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:02 compute-2 ceph-mon[77081]: 68 slow requests (by type [ 'delayed' : 68 ] most affected pool [ 'vms' : 47 ])
Jan 22 15:17:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:03.098+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:17:04 compute-2 ceph-mon[77081]: pgmap v3297: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 20 op/s
Jan 22 15:17:04 compute-2 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 6012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:04 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 32 ])
Jan 22 15:17:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:17:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:17:04 compute-2 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 15:17:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:17:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:17:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:17:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:04.093+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:04.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:05 compute-2 ceph-mon[77081]: pgmap v3298: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 20 op/s
Jan 22 15:17:05 compute-2 ceph-mon[77081]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:05.059+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:05 compute-2 sudo[276323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:05 compute-2 sudo[276323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:05 compute-2 sudo[276323]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:05 compute-2 sudo[276348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:05 compute-2 sudo[276348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:05 compute-2 sudo[276348]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:06.013+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:06.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:07.038+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:07 compute-2 ceph-mon[77081]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:08.067+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:08 compute-2 ceph-mon[77081]: pgmap v3299: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 597 B/s wr, 94 op/s
Jan 22 15:17:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:08 compute-2 ceph-mon[77081]: Health check update: 131 slow ops, oldest one blocked for 6018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:08 compute-2 ceph-mon[77081]: pgmap v3300: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 597 B/s wr, 94 op/s
Jan 22 15:17:08 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:08.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:09.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:09.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:09 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:10.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:10.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:10.977+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:11 compute-2 ceph-mon[77081]: pgmap v3301: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 852 B/s wr, 116 op/s
Jan 22 15:17:11 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:11 compute-2 sudo[276376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:11 compute-2 sudo[276376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:11 compute-2 sudo[276376]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:11 compute-2 sudo[276401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:17:11 compute-2 sudo[276401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:11 compute-2 sudo[276401]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:11.937+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:12 compute-2 ceph-mon[77081]: pgmap v3302: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 852 B/s wr, 151 op/s
Jan 22 15:17:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:17:12 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:12.890+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:13.883+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:14 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:14 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:14.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:15 compute-2 ceph-mon[77081]: pgmap v3303: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 682 B/s wr, 131 op/s
Jan 22 15:17:15 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:15.831+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:16.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:16 compute-2 ceph-mon[77081]: pgmap v3304: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 682 B/s wr, 175 op/s
Jan 22 15:17:16 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:16.788+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:17.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:17.817+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:18 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:18.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:18.784+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:19 compute-2 podman[276430]: 2026-01-22 15:17:19.030382578 +0000 UTC m=+0.084222717 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:17:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:19.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:19.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:19 compute-2 ceph-mon[77081]: pgmap v3305: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 255 B/s wr, 101 op/s
Jan 22 15:17:19 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:17:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/243300702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:17:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:20.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:20.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:21.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:21 compute-2 ceph-mon[77081]: pgmap v3306: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 255 B/s wr, 101 op/s
Jan 22 15:17:21 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:21.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:22.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:22.774+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:23 compute-2 ceph-mon[77081]: pgmap v3307: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 22 15:17:23 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:23 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:23.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:24.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:24.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:25 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:17:25.022 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:17:25 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:17:25.024 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:17:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:25 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:25 compute-2 ceph-mon[77081]: pgmap v3308: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 22 15:17:25 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:25 compute-2 sudo[276453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:25 compute-2 sudo[276453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:25 compute-2 sudo[276453]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:25 compute-2 sudo[276478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:25 compute-2 sudo[276478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:25 compute-2 sudo[276478]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:25.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:26.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:27 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:27.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:28 compute-2 ceph-mon[77081]: pgmap v3309: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 22 15:17:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:28 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:28 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:28.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:28.809+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:29.798+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:30 compute-2 ceph-mon[77081]: pgmap v3310: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:30.823+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:31.794+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:32 compute-2 ceph-mon[77081]: pgmap v3311: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:32.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:33 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:17:33.027 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:17:33 compute-2 podman[276507]: 2026-01-22 15:17:33.072975501 +0000 UTC m=+0.119905402 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:17:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:33.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:34 compute-2 ceph-mon[77081]: pgmap v3312: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:34 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:34 compute-2 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:34.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:35.765+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:17:36 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:36 compute-2 ceph-mon[77081]: pgmap v3313: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:36.768+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:17:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:37.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:37.741+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:17:37 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:37 compute-2 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 15:17:37 compute-2 ceph-mon[77081]: pgmap v3314: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:37 compute-2 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:17:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:38.754+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:38 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:17:38 compute-2 ceph-mon[77081]: pgmap v3315: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:39.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:39 compute-2 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 15:17:39 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:40.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:40.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:41.671+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:42 compute-2 ceph-mon[77081]: pgmap v3316: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:42.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:42.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:43.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:43 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:43 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:43 compute-2 ceph-mon[77081]: pgmap v3317: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:43 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:43 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:43.634+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:44.646+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:44 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:44 compute-2 ceph-mon[77081]: pgmap v3318: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:45.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:45 compute-2 sudo[276540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:45 compute-2 sudo[276540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:45 compute-2 sudo[276540]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:45.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:45 compute-2 sudo[276565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:17:45 compute-2 sudo[276565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:17:45 compute-2 sudo[276565]: pam_unix(sudo:session): session closed for user root
Jan 22 15:17:46 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:46 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:46.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:46.698+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:17:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:17:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:17:47.254 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:17:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:17:47.254 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:17:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:47.656+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:47 compute-2 ceph-mon[77081]: pgmap v3319: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:48.669+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:49.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:49 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:49 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:49 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:49 compute-2 ceph-mon[77081]: pgmap v3320: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:49 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:49.669+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:49 compute-2 podman[276592]: 2026-01-22 15:17:49.990129071 +0000 UTC m=+0.053242236 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 15:17:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:50.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:50.672+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:50 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:51.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:51.663+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:52.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:52.664+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:53.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:53.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:53 compute-2 ceph-mon[77081]: pgmap v3321: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:53 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:54.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:54 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-2 ceph-mon[77081]: pgmap v3322: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:54 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-2 ceph-mon[77081]: pgmap v3323: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:54 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:54 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:17:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:55.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:55.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:55 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:56.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:56.733+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:17:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:57.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:17:57 compute-2 ceph-mon[77081]: pgmap v3324: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:57 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:17:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:57.700+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:17:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:17:58 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:58 compute-2 ceph-mon[77081]: pgmap v3325: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:17:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:58.689+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:17:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:17:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:59.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:17:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:59.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:17:59 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:17:59 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:00.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:00.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:01 compute-2 ceph-mon[77081]: pgmap v3326: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:01 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:01.668+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:02.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:02 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:02.698+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:03 compute-2 ceph-mon[77081]: pgmap v3327: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:03 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:03 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:03.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:04 compute-2 podman[276618]: 2026-01-22 15:18:04.083265251 +0000 UTC m=+0.130994493 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 15:18:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:04.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:04.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:05 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:05 compute-2 ceph-mon[77081]: pgmap v3328: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:05 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:05.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:05.786+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:05 compute-2 sudo[276646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:05 compute-2 sudo[276646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:05 compute-2 sudo[276646]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:05 compute-2 sudo[276671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:05 compute-2 sudo[276671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:05 compute-2 sudo[276671]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:06.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:06 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:06 compute-2 ceph-mon[77081]: pgmap v3329: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:06 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:06.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:07.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:07 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:07 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:08.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:08.791+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:08 compute-2 ceph-mon[77081]: pgmap v3330: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:08 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:09.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:09.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:09 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:10.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:10.771+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:10 compute-2 ceph-mon[77081]: pgmap v3331: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:10 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:11.739+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:11 compute-2 sudo[276700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:11 compute-2 sudo[276700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:11 compute-2 sudo[276700]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:12 compute-2 sudo[276725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:18:12 compute-2 sudo[276725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:12 compute-2 sudo[276725]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-2 sudo[276750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:12 compute-2 sudo[276750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:12 compute-2 sudo[276750]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-2 sudo[276775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:18:12 compute-2 sudo[276775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:12.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:12 compute-2 sudo[276775]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:12.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:13 compute-2 ceph-mon[77081]: pgmap v3332: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:13 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:13 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:13.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:14 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:18:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:18:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:18:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:18:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:18:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:14.671+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:15.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:15 compute-2 ceph-mon[77081]: pgmap v3333: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:15 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:15.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:16 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:16 compute-2 ceph-mon[77081]: pgmap v3334: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:16 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:16.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:16.707+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:17 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:17 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:17.698+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:18 compute-2 ceph-mon[77081]: pgmap v3335: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:18 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:18.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:18.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:19.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/433821131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:18:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/433821131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:18:19 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:19.694+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:20.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:20 compute-2 sudo[276835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:20 compute-2 sudo[276835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:20 compute-2 sudo[276835]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:20.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:20 compute-2 sudo[276866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:18:20 compute-2 podman[276859]: 2026-01-22 15:18:20.747399806 +0000 UTC m=+0.056288526 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 15:18:20 compute-2 sudo[276866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:20 compute-2 sudo[276866]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:21 compute-2 ceph-mon[77081]: pgmap v3336: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:21 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:18:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:21.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:21.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:22.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:22.688+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:23.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:23 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:23 compute-2 ceph-mon[77081]: pgmap v3337: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:23 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:23.642+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:24.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:25.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:25 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:25 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:25 compute-2 ceph-mon[77081]: pgmap v3338: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:25 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:25.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:25.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:25 compute-2 sudo[276907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:25 compute-2 sudo[276907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:25 compute-2 sudo[276907]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:26 compute-2 sudo[276932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:26 compute-2 sudo[276932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:26 compute-2 sudo[276932]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:26 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:26.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:27.632+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:27 compute-2 ceph-mon[77081]: pgmap v3339: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:27 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:28 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:18:28.012 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:18:28 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:18:28.013 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:18:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:28 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:28 compute-2 ceph-mon[77081]: pgmap v3340: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:28 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:28.681+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:29.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:29 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:29.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:30.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:30 compute-2 ceph-mon[77081]: pgmap v3341: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:30 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:31.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:31.696+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:32 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:32.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:33 compute-2 ceph-mon[77081]: pgmap v3342: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:33 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:33 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:33.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:33.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:33.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:34 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:34.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:34 compute-2 podman[276963]: 2026-01-22 15:18:34.998848893 +0000 UTC m=+0.065241580 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 15:18:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:35 compute-2 ceph-mon[77081]: pgmap v3343: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:35 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:35.707+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:36 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:36.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:37 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:18:37.017 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:18:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:37.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:37 compute-2 ceph-mon[77081]: pgmap v3344: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:37 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:37.762+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:38 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:38 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:38.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:39.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:39 compute-2 ceph-mon[77081]: pgmap v3345: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:39 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:39.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:40 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:40 compute-2 ceph-mon[77081]: pgmap v3346: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:40 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:40.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:41.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:41.858+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:42 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:42.903+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:43.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:43.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:43 compute-2 ceph-mon[77081]: pgmap v3347: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:43 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:43 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:43.937+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:44 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:44 compute-2 ceph-mon[77081]: pgmap v3348: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:44 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:44.962+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:45.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:45 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:45.915+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:46 compute-2 sudo[276994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:46 compute-2 sudo[276994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:46 compute-2 sudo[276994]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:46 compute-2 sudo[277019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:18:46 compute-2 sudo[277019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:18:46 compute-2 sudo[277019]: pam_unix(sudo:session): session closed for user root
Jan 22 15:18:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:46.903+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:47 compute-2 ceph-mon[77081]: pgmap v3349: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:47 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:18:47.254 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:18:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:18:47.255 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:18:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:18:47.255 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:18:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:47.931+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.074149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128074173, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1889, "num_deletes": 459, "total_data_size": 3515101, "memory_usage": 3585424, "flush_reason": "Manual Compaction"}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128096614, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 2286898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100844, "largest_seqno": 102728, "table_properties": {"data_size": 2279320, "index_size": 3943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23062, "raw_average_key_size": 22, "raw_value_size": 2261434, "raw_average_value_size": 2217, "num_data_blocks": 171, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 459, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095006, "oldest_key_time": 1769095006, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 22532 microseconds, and 5454 cpu microseconds.
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.096676) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 2286898 bytes OK
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.096695) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.098633) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.098647) EVENT_LOG_v1 {"time_micros": 1769095128098643, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.098663) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 3505427, prev total WAL file size 3505691, number of live WAL files 2.
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.100693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034373833' seq:72057594037927935, type:22 .. '6C6F676D0035303335' seq:0, type:0; will stop at (end)
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(2233KB)], [207(10157KB)]
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128100762, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 12687966, "oldest_snapshot_seqno": -1}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 14170 keys, 12485228 bytes, temperature: kUnknown
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128200425, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12485228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12407607, "index_size": 41092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390167, "raw_average_key_size": 27, "raw_value_size": 12166067, "raw_average_value_size": 858, "num_data_blocks": 1492, "num_entries": 14170, "num_filter_entries": 14170, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.200701) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12485228 bytes
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202414) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.2 rd, 125.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 15103, records dropped: 933 output_compression: NoCompression
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202486) EVENT_LOG_v1 {"time_micros": 1769095128202461, "job": 134, "event": "compaction_finished", "compaction_time_micros": 99768, "compaction_time_cpu_micros": 34508, "output_level": 6, "num_output_files": 1, "total_output_size": 12485228, "num_input_records": 15103, "num_output_records": 14170, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128203294, "job": 134, "event": "table_file_deletion", "file_number": 209}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128206751, "job": 134, "event": "table_file_deletion", "file_number": 207}
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.100518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:18:48 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:48 compute-2 ceph-mon[77081]: pgmap v3350: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:48.902+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:49.860+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:50 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:50.902+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:51 compute-2 podman[277047]: 2026-01-22 15:18:51.011055846 +0000 UTC m=+0.068089674 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 15:18:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:51.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:51 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:51 compute-2 ceph-mon[77081]: pgmap v3351: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:51 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:18:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:18:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:51.923+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:52 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:52.921+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:53.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:53.944+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:54 compute-2 ceph-mon[77081]: pgmap v3352: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:54 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:54 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:54 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:54.924+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:55.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:55.907+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:56 compute-2 ceph-mon[77081]: pgmap v3353: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:56 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:56.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:57 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:57 compute-2 ceph-mon[77081]: pgmap v3354: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:57.850+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:58 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:58 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:58 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:18:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:58.824+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:18:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:18:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:18:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:18:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:18:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:18:59 compute-2 ceph-mon[77081]: pgmap v3355: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:18:59 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:18:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:18:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:59.869+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:00 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:00 compute-2 ceph-mon[77081]: pgmap v3356: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:00 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:00.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:01.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:01.885+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:02 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:02.860+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:03 compute-2 ceph-mon[77081]: pgmap v3357: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:03 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:03 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:03.846+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:04 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:04 compute-2 ceph-mon[77081]: pgmap v3358: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:04 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:04.806+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:05.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:05 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:06 compute-2 podman[277074]: 2026-01-22 15:19:06.075540091 +0000 UTC m=+0.132834041 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 15:19:06 compute-2 sudo[277099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:06 compute-2 sudo[277099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:06 compute-2 sudo[277099]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:06 compute-2 sudo[277124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:06 compute-2 sudo[277124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:06 compute-2 sudo[277124]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:06.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:06 compute-2 ceph-mon[77081]: pgmap v3359: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:06 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:07.868+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:08.909+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:09 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:09 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:09.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:09.904+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:10 compute-2 ceph-mon[77081]: pgmap v3360: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:10 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:10 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:10.898+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:11.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:11.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:11 compute-2 ceph-mon[77081]: pgmap v3361: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:11 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:11 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:11.887+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:12 compute-2 ceph-mon[77081]: pgmap v3362: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:12 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:12.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:13.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:13.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:13 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:13 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:13.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:14 compute-2 ceph-mon[77081]: pgmap v3363: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:14 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:14.890+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:15.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:15.853+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:15 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:16.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:16 compute-2 ceph-mon[77081]: pgmap v3364: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:16 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:17.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:17.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:17.809+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:18 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:18 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:18.790+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:19.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:19 compute-2 ceph-mon[77081]: pgmap v3365: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:19 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/511152699' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:19:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/511152699' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:19:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:19.833+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:20 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:20 compute-2 sudo[277156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:20 compute-2 sudo[277156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:20 compute-2 sudo[277156]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:20.873+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:20 compute-2 sudo[277181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:19:20 compute-2 sudo[277181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:20 compute-2 sudo[277181]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:21 compute-2 sudo[277207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:21 compute-2 sudo[277207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:21 compute-2 sudo[277207]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:21.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:21 compute-2 sudo[277233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:19:21 compute-2 sudo[277233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:21 compute-2 podman[277231]: 2026-01-22 15:19:21.175045775 +0000 UTC m=+0.083621132 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:19:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:21 compute-2 sudo[277233]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:21 compute-2 ceph-mon[77081]: pgmap v3366: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:21 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:21 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:21.923+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:22.911+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:22 compute-2 ceph-mon[77081]: pgmap v3367: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:22 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:19:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:19:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:19:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:19:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:19:22 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:23.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:23.916+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:24.871+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:25 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:25.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:25.847+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:26 compute-2 ceph-mon[77081]: pgmap v3368: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:26 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:26 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:26 compute-2 sudo[277308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:26 compute-2 sudo[277308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:26 compute-2 sudo[277308]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:26 compute-2 sudo[277333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:26 compute-2 sudo[277333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:26 compute-2 sudo[277333]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:26.881+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:27.886+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:27 compute-2 ceph-mon[77081]: pgmap v3369: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:27 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:28.871+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:29 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:29 compute-2 ceph-mon[77081]: pgmap v3370: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:29 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:29 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:29.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:29.914+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:30 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:30.876+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:31.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:31 compute-2 sudo[277361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:31 compute-2 sudo[277361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:31 compute-2 sudo[277361]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:31 compute-2 sudo[277386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:19:31 compute-2 sudo[277386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:31 compute-2 sudo[277386]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:31 compute-2 ceph-mon[77081]: pgmap v3371: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:31 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:31 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:19:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:31.927+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:32.932+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:33 compute-2 ceph-mon[77081]: pgmap v3372: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:33 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:33 compute-2 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:33.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:33.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:33.892+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:34 compute-2 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:19:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:34.922+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:35.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:35 compute-2 ceph-mon[77081]: pgmap v3373: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:35 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:35.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:35.894+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:36 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:36.845+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:37 compute-2 podman[277414]: 2026-01-22 15:19:37.014111267 +0000 UTC m=+0.074867930 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 22 15:19:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:37 compute-2 ceph-mon[77081]: pgmap v3374: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:37 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:37.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:37.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:38 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:38 compute-2 ceph-mon[77081]: pgmap v3375: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:38 compute-2 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:38 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:38.876+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 15:19:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:39.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 15:19:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:39.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:40 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:40.880+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:41.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:41 compute-2 ceph-mon[77081]: pgmap v3376: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:41 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:41.841+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:42 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:42 compute-2 ceph-mon[77081]: pgmap v3377: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:42 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:42.875+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:43 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:43.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:44 compute-2 ceph-mon[77081]: pgmap v3378: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:44 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:44.841+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:45.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:45.855+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:46 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:46 compute-2 sudo[277445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:46 compute-2 sudo[277445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:46 compute-2 sudo[277445]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:46 compute-2 sudo[277470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:19:46 compute-2 sudo[277470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:19:46 compute-2 sudo[277470]: pam_unix(sudo:session): session closed for user root
Jan 22 15:19:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:46.886+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:47.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:19:47.256 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:19:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:19:47.256 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:19:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:19:47.257 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:19:47 compute-2 ceph-mon[77081]: pgmap v3379: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:47 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:47 compute-2 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:47.930+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:48 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:48 compute-2 ceph-mon[77081]: pgmap v3380: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:48.896+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:49.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:49.876+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:49 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:49 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:50.857+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:51 compute-2 ceph-mon[77081]: pgmap v3381: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:51 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:51.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:51.834+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:51 compute-2 podman[277498]: 2026-01-22 15:19:51.997742907 +0000 UTC m=+0.057867809 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:19:52 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:52 compute-2 ceph-mon[77081]: pgmap v3382: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:52.806+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:53.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:53.802+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:53 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:53 compute-2 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:53 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:54.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.223983) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195224066, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1180, "num_deletes": 362, "total_data_size": 1970507, "memory_usage": 1996432, "flush_reason": "Manual Compaction"}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195238270, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 1293702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 102734, "largest_seqno": 103908, "table_properties": {"data_size": 1288682, "index_size": 2223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15155, "raw_average_key_size": 22, "raw_value_size": 1277050, "raw_average_value_size": 1864, "num_data_blocks": 95, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 362, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095128, "oldest_key_time": 1769095128, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 14401 microseconds, and 8316 cpu microseconds.
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.238390) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 1293702 bytes OK
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.238421) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.240530) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.240549) EVENT_LOG_v1 {"time_micros": 1769095195240542, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.240571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1964213, prev total WAL file size 1964213, number of live WAL files 2.
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.241671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(1263KB)], [210(11MB)]
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195241716, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 13778930, "oldest_snapshot_seqno": -1}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 14116 keys, 12015602 bytes, temperature: kUnknown
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195331669, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 12015602, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11938470, "index_size": 40731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35333, "raw_key_size": 389314, "raw_average_key_size": 27, "raw_value_size": 11697975, "raw_average_value_size": 828, "num_data_blocks": 1475, "num_entries": 14116, "num_filter_entries": 14116, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332232) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 12015602 bytes
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.334484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.8 rd, 133.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(19.9) write-amplify(9.3) OK, records in: 14855, records dropped: 739 output_compression: NoCompression
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.334521) EVENT_LOG_v1 {"time_micros": 1769095195334504, "job": 136, "event": "compaction_finished", "compaction_time_micros": 90183, "compaction_time_cpu_micros": 29430, "output_level": 6, "num_output_files": 1, "total_output_size": 12015602, "num_input_records": 14855, "num_output_records": 14116, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195335697, "job": 136, "event": "table_file_deletion", "file_number": 212}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195341241, "job": 136, "event": "table_file_deletion", "file_number": 210}
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.241620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:19:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:55.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:55 compute-2 ceph-mon[77081]: pgmap v3383: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:55 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:55.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:56 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:56 compute-2 ceph-mon[77081]: pgmap v3384: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:56 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:56.761+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:19:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:57.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:19:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:57.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:58 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:58.686+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:59 compute-2 ceph-mon[77081]: pgmap v3385: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:19:59 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:19:59 compute-2 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:19:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:19:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:59.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:19:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:19:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:19:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:19:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:19:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:59.722+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:19:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:00 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 15:20:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 15:20:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:00.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:01.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:01 compute-2 ceph-mon[77081]: pgmap v3386: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:01 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:01 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:01.735+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:02 compute-2 ceph-mon[77081]: pgmap v3387: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:02 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:02.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:03.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:03.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:03.726+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:03 compute-2 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 15:20:03 compute-2 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:04.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:05.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:05 compute-2 ceph-mon[77081]: pgmap v3388: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:05 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:05.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:05.713+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:06 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:06.700+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:06 compute-2 sudo[277524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:06 compute-2 sudo[277524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:06 compute-2 sudo[277524]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:06 compute-2 sudo[277549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:06 compute-2 sudo[277549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:06 compute-2 sudo[277549]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:07.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:07.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:07 compute-2 ceph-mon[77081]: pgmap v3389: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:07 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:07.728+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:08 compute-2 podman[277575]: 2026-01-22 15:20:08.050161948 +0000 UTC m=+0.102321729 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 15:20:08 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:08 compute-2 ceph-mon[77081]: pgmap v3390: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:08 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:08 compute-2 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:08.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:09.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:09.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:09.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:10 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:10 compute-2 ceph-mon[77081]: pgmap v3391: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:10.785+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:11.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:11 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:11 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:11.768+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:12.736+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:13.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:13 compute-2 ceph-mon[77081]: pgmap v3392: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:13 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:13 compute-2 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:13.778+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:14.771+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:15 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:15 compute-2 ceph-mon[77081]: pgmap v3393: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:15 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:15.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:15.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:15.735+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:16 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:16.719+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:17.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:17.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:17.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:17 compute-2 ceph-mon[77081]: pgmap v3394: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:17 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:17 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:18.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:19.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:19 compute-2 ceph-mon[77081]: pgmap v3395: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:19 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:19 compute-2 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/796340529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:20:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/796340529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:20:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:19.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:19.644+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:20.685+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:21.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:21 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:21 compute-2 ceph-mon[77081]: pgmap v3396: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:21 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:21.638+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:22.630+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:23 compute-2 podman[277609]: 2026-01-22 15:20:23.026850113 +0000 UTC m=+0.083014887 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible)
Jan 22 15:20:23 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:23 compute-2 ceph-mon[77081]: pgmap v3397: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:23 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:23.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:23.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:24 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:24.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:25.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:25 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:25 compute-2 ceph-mon[77081]: pgmap v3398: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:25.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:26.584+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:26 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:26 compute-2 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:20:26 compute-2 ceph-mon[77081]: pgmap v3399: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:26 compute-2 sudo[277629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:26 compute-2 sudo[277629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:26 compute-2 sudo[277629]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:26 compute-2 sudo[277655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:26 compute-2 sudo[277655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:26 compute-2 sudo[277655]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:27.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:27.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:27.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:27 compute-2 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:27 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:28.573+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:29 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:29 compute-2 ceph-mon[77081]: pgmap v3400: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:29.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:29.591+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:30 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:30.610+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:31.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:31 compute-2 sudo[277682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:31 compute-2 sudo[277682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-2 sudo[277682]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-2 sudo[277707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:20:31 compute-2 sudo[277707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-2 sudo[277707]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:31 compute-2 ceph-mon[77081]: pgmap v3401: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:31 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:31 compute-2 sudo[277732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:31 compute-2 sudo[277732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-2 sudo[277732]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-2 sudo[277757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:20:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:31.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:31 compute-2 sudo[277757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-2 sudo[277757]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-2 sudo[277802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:31 compute-2 sudo[277802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-2 sudo[277802]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-2 sudo[277827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:20:31 compute-2 sudo[277827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:31 compute-2 sudo[277827]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:31 compute-2 sudo[277852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:32 compute-2 sudo[277852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:32 compute-2 sudo[277852]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:32 compute-2 sudo[277877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:20:32 compute-2 sudo[277877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:32 compute-2 podman[277974]: 2026-01-22 15:20:32.496823871 +0000 UTC m=+0.056249225 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 22 15:20:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:32.542+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:32 compute-2 podman[277994]: 2026-01-22 15:20:32.651588812 +0000 UTC m=+0.059793749 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:20:32 compute-2 podman[277974]: 2026-01-22 15:20:32.657510259 +0000 UTC m=+0.216935613 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 15:20:32 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:32 compute-2 ceph-mon[77081]: pgmap v3402: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:20:32 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:20:32 compute-2 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 15:20:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:33.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 15:20:33 compute-2 podman[278127]: 2026-01-22 15:20:33.343239265 +0000 UTC m=+0.058258429 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:20:33 compute-2 podman[278127]: 2026-01-22 15:20:33.353955719 +0000 UTC m=+0.068974863 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:20:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:33.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:33 compute-2 podman[278194]: 2026-01-22 15:20:33.57422469 +0000 UTC m=+0.058557946 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Jan 22 15:20:33 compute-2 podman[278194]: 2026-01-22 15:20:33.588849369 +0000 UTC m=+0.073182585 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793)
Jan 22 15:20:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:33.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:33 compute-2 sudo[277877]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:33 compute-2 sudo[278227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:33 compute-2 sudo[278227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:33 compute-2 sudo[278227]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:33 compute-2 sudo[278252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:20:33 compute-2 sudo[278252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:33 compute-2 sudo[278252]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:33 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:33 compute-2 sudo[278277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:33 compute-2 sudo[278277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:33 compute-2 sudo[278277]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:33 compute-2 sudo[278302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:20:33 compute-2 sudo[278302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:34 compute-2 sudo[278302]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:34.618+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:34 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:34 compute-2 ceph-mon[77081]: pgmap v3403: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:20:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:20:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:20:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:20:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:20:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:35.654+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:36 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:36.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:37 compute-2 ceph-mon[77081]: pgmap v3404: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:37 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:37.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:37.634+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:38.592+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:38 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:39 compute-2 podman[278362]: 2026-01-22 15:20:39.085280129 +0000 UTC m=+0.132423699 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 22 15:20:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:39.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:39.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:39 compute-2 ceph-mon[77081]: pgmap v3405: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:39 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:39 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:39 compute-2 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:40.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:41.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:41 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:41 compute-2 ceph-mon[77081]: pgmap v3406: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:41.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:42.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:43.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:43.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:43.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:44.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:45.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:45.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:45.495+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:46 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:46 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:46 compute-2 ceph-mon[77081]: pgmap v3407: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:46.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-2 sudo[278393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:47 compute-2 sudo[278393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:47 compute-2 sudo[278393]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:47 compute-2 sudo[278418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:47 compute-2 sudo[278418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:47 compute-2 sudo[278418]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:47.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:20:47.257 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:20:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:20:47.258 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:20:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:20:47.258 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:20:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:47.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:47.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-2 ceph-mon[77081]: pgmap v3408: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:47 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-2 ceph-mon[77081]: pgmap v3409: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:47 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:47 compute-2 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:48.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:49.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:49 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:49 compute-2 ceph-mon[77081]: pgmap v3410: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:49.571+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:50.551+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:51 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:51 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:51 compute-2 ceph-mon[77081]: pgmap v3411: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:20:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:51.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:20:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:51.575+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:52 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:52 compute-2 ceph-mon[77081]: pgmap v3412: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:52 compute-2 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:52.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:53.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:20:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:53.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:20:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:53.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:53 compute-2 podman[278446]: 2026-01-22 15:20:53.989285873 +0000 UTC m=+0.051702984 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:20:54 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:54 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:54.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:55.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:55 compute-2 sudo[278467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:20:55 compute-2 sudo[278467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:55 compute-2 sudo[278467]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:55 compute-2 sudo[278492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:20:55 compute-2 sudo[278492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:20:55 compute-2 sudo[278492]: pam_unix(sudo:session): session closed for user root
Jan 22 15:20:55 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:55 compute-2 ceph-mon[77081]: pgmap v3413: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:20:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:55.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:56.548+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:56 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:56 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:56 compute-2 ceph-mon[77081]: pgmap v3414: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:20:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:57.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:57.543+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:20:58 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:20:58 compute-2 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:20:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:58.571+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:20:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:59.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:20:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:20:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:20:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:20:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:59.598+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:20:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:00 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:00 compute-2 ceph-mon[77081]: pgmap v3415: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:00 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:00.631+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:01.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:01 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:01 compute-2 ceph-mon[77081]: pgmap v3416: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:01.585+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:02 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:02 compute-2 ceph-mon[77081]: pgmap v3417: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:02 compute-2 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:02.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:03.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:03.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:03.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:03 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:03 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:04 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:04 compute-2 ceph-mon[77081]: pgmap v3418: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:04.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:05.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:05.562+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:05 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:06.591+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:06 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:06 compute-2 ceph-mon[77081]: pgmap v3419: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:07 compute-2 sudo[278523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:07 compute-2 sudo[278523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:07 compute-2 sudo[278523]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:07.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:07 compute-2 sudo[278548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:07 compute-2 sudo[278548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:07 compute-2 sudo[278548]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:07.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:07.627+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:07 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:07 compute-2 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:07 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 15:21:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:08.660+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:08 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:08 compute-2 ceph-mon[77081]: pgmap v3420: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:09.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:09.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:09.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:10 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:10 compute-2 podman[278574]: 2026-01-22 15:21:10.085533267 +0000 UTC m=+0.133575900 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 15:21:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:10.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:11 compute-2 ceph-mon[77081]: pgmap v3421: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:11 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:11.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:11.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:11.756+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:12 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:12.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:13.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:13 compute-2 ceph-mon[77081]: pgmap v3422: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:13 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:13 compute-2 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:13 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:13.741+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:14.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:14 compute-2 ceph-mon[77081]: pgmap v3423: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:14 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:15.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:15.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:15.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:16 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:16.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:17.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:17 compute-2 ceph-mon[77081]: pgmap v3424: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:17 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:17 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:17.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:17.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:18 compute-2 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:18 compute-2 ceph-mon[77081]: pgmap v3425: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:18 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:21:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:21:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:21:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:21:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:18.723+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:19.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:19.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:19.715+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:21:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:21:19 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:20.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:20 compute-2 ceph-mon[77081]: pgmap v3426: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:20 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:21.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:21.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:21.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:21 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:22.755+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:22 compute-2 ceph-mon[77081]: pgmap v3427: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:22 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:22 compute-2 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:23.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:23.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:24 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:24.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:25 compute-2 podman[278609]: 2026-01-22 15:21:25.027485931 +0000 UTC m=+0.082068721 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:21:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:25.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:25 compute-2 ceph-mon[77081]: pgmap v3428: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:25 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:25.726+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:26.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:27 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:27 compute-2 ceph-mon[77081]: pgmap v3429: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:27 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:27 compute-2 sudo[278629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:27 compute-2 sudo[278629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:27 compute-2 sudo[278629]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:27 compute-2 sudo[278654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:27 compute-2 sudo[278654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:27 compute-2 sudo[278654]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:27.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:28 compute-2 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 15:21:28 compute-2 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:28.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:29 compute-2 ceph-mon[77081]: pgmap v3430: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:29 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:29.668+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:30.681+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:30 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:30 compute-2 ceph-mon[77081]: pgmap v3431: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:30 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:31.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:32 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:32.677+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:33 compute-2 ceph-mon[77081]: pgmap v3432: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:33 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:33 compute-2 ceph-mon[77081]: Health check update: 140 slow ops, oldest one blocked for 6283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:33.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:34 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:34.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:35 compute-2 ceph-mon[77081]: pgmap v3433: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:35 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:35.713+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:36 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:36.695+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:37 compute-2 ceph-mon[77081]: pgmap v3434: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:37 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:37.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:37.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:38 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:38 compute-2 ceph-mon[77081]: Health check update: 140 slow ops, oldest one blocked for 6288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:38 compute-2 ceph-mon[77081]: pgmap v3435: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:38.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:39.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:39.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:39.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:40 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:40 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 15:21:40 compute-2 ceph-mon[77081]: pgmap v3436: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:40 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:40.761+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:41 compute-2 podman[278686]: 2026-01-22 15:21:41.018140039 +0000 UTC m=+0.080207462 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 15:21:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:41.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:41.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:41.753+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:41 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:42.742+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:43 compute-2 ceph-mon[77081]: pgmap v3437: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:43 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:43 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:43.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:43.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:43.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:44 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:44.769+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:45 compute-2 ceph-mon[77081]: pgmap v3438: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:45 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:45.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:45.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:45.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:46 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:46 compute-2 ceph-mon[77081]: pgmap v3439: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:46.822+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:21:47.259 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:21:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:21:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:21:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:21:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:21:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:47.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:47 compute-2 sudo[278715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:47 compute-2 sudo[278715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:47 compute-2 sudo[278715]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:47 compute-2 sudo[278740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:47 compute-2 sudo[278740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:47 compute-2 sudo[278740]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:47 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:47 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:47 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:47.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:48 compute-2 ceph-mon[77081]: pgmap v3440: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:48 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:48.794+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:49.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:49.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:49.761+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:49 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:50.739+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:50 compute-2 ceph-mon[77081]: pgmap v3441: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:50 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:51.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:51.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:51.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:52 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:52.773+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:53 compute-2 ceph-mon[77081]: pgmap v3442: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:53 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:53 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:53.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:53.802+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:54 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:54.764+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:55 compute-2 ceph-mon[77081]: pgmap v3443: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:55 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:55 compute-2 sudo[278769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:55 compute-2 sudo[278769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:55 compute-2 sudo[278769]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:55 compute-2 sudo[278800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:21:55 compute-2 sudo[278800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:55 compute-2 podman[278793]: 2026-01-22 15:21:55.579214523 +0000 UTC m=+0.080779556 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 15:21:55 compute-2 sudo[278800]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:21:55.602 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:21:55 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:21:55.602 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:21:55 compute-2 sudo[278838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:21:55 compute-2 sudo[278838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:55 compute-2 sudo[278838]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:55 compute-2 sudo[278863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:21:55 compute-2 sudo[278863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:21:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:55.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:56 compute-2 sudo[278863]: pam_unix(sudo:session): session closed for user root
Jan 22 15:21:56 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:56 compute-2 ceph-mon[77081]: pgmap v3444: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:21:56 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:21:56 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:21:56.604 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:21:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:56.799+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:21:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:21:57 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:21:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:21:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:21:57 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:21:57 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:57.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:58 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:21:58 compute-2 ceph-mon[77081]: pgmap v3445: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:21:58 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:58.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:21:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:21:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:59.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:21:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:21:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:21:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:59.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:21:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:21:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:59.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:21:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:00 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:00.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:01 compute-2 ceph-mon[77081]: pgmap v3446: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:01 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:01.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:01.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:01.777+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:02 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:02 compute-2 ceph-mon[77081]: pgmap v3447: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:02.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:03.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:03 compute-2 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 15:22:03 compute-2 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:03.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:03.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:04.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:04 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:04 compute-2 ceph-mon[77081]: pgmap v3448: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:04 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:05.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:05.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:05.740+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:06 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:06 compute-2 ceph-mon[77081]: pgmap v3449: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:06.743+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:07 compute-2 sudo[278925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:07 compute-2 sudo[278925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:07 compute-2 sudo[278925]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:07.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:07 compute-2 sudo[278950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:22:07 compute-2 sudo[278950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:07 compute-2 sudo[278950]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:07 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:07 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:22:07 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:22:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:07.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:07 compute-2 sudo[278975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:07 compute-2 sudo[278975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:07 compute-2 sudo[278975]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:07.730+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:07 compute-2 sudo[279000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:07 compute-2 sudo[279000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:07 compute-2 sudo[279000]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:08 compute-2 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:08 compute-2 ceph-mon[77081]: pgmap v3450: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:08 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:08.780+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:09.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:09 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:10.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:11 compute-2 ceph-mon[77081]: pgmap v3451: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:11 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:11.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:11.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:12 compute-2 podman[279027]: 2026-01-22 15:22:12.090423288 +0000 UTC m=+0.138592913 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 15:22:12 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:12.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:13.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:13 compute-2 ceph-mon[77081]: pgmap v3452: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:13 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:13 compute-2 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:13.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:14 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:14 compute-2 ceph-mon[77081]: pgmap v3453: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:14 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:14.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:15.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:15.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:15.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:16 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:16.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:17.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:17 compute-2 ceph-mon[77081]: pgmap v3454: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:17 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 15:22:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:17.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 15:22:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:17.826+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:18 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:18 compute-2 ceph-mon[77081]: pgmap v3455: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:18 compute-2 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:18 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:18.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:22:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3481393543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:22:19 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:19.733+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:20.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:20 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:22:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:20 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:22:21 compute-2 ceph-mon[77081]: pgmap v3456: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:21 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:21.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:22 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:22.746+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:23 compute-2 ceph-mon[77081]: pgmap v3457: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:23 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:23 compute-2 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:23.709+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:24 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:24 compute-2 ceph-mon[77081]: pgmap v3458: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:24 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:24.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:25.724+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:25 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:25 compute-2 podman[279063]: 2026-01-22 15:22:25.992190199 +0000 UTC m=+0.054384906 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:22:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:26.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:27 compute-2 ceph-mon[77081]: pgmap v3459: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 22 15:22:27 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:27.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:27 compute-2 sudo[279085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:27 compute-2 sudo[279085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:27 compute-2 sudo[279085]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:27 compute-2 sudo[279110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:27 compute-2 sudo[279110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:27 compute-2 sudo[279110]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:28 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:28 compute-2 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:28.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:29.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:29 compute-2 ceph-mon[77081]: pgmap v3460: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 22 15:22:29 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:29 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:29.755+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:30.787+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:31.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:31 compute-2 ceph-mon[77081]: pgmap v3461: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 12 op/s
Jan 22 15:22:31 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:31.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:31.795+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:32 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:32 compute-2 ceph-mon[77081]: pgmap v3462: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 15:22:32 compute-2 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 15:22:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:32.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:33.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:33 compute-2 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:33 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:33.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:33.735+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:34 compute-2 ceph-mon[77081]: pgmap v3463: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 15:22:34 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:34.712+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:35.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:35 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:36.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:36 compute-2 ceph-mon[77081]: pgmap v3464: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 852 B/s wr, 21 op/s
Jan 22 15:22:36 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:37.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:37.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:37.765+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:38 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:38 compute-2 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:38.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:39 compute-2 ceph-mon[77081]: pgmap v3465: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 15:22:39 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:39 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:39.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:39.854+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:40 compute-2 ceph-mon[77081]: pgmap v3466: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 255 B/s wr, 10 op/s
Jan 22 15:22:40 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:40.842+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:41.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:41.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:41.836+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:41 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:42.808+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:42 compute-2 ceph-mon[77081]: pgmap v3467: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 22 15:22:42 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:42 compute-2 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:43 compute-2 podman[279143]: 2026-01-22 15:22:43.030216919 +0000 UTC m=+0.091969084 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:22:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:43.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:43.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:43.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:43 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:44.728+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:44 compute-2 ceph-mon[77081]: pgmap v3468: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:44 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:45.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:45.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:45.687+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:45 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:46.662+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:46 compute-2 ceph-mon[77081]: pgmap v3469: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:46 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:22:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:22:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:22:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:22:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:22:47.261 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:22:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:47.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:47.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:47.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:47 compute-2 sudo[279171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:47 compute-2 sudo[279171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:47 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:47 compute-2 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:47 compute-2 sudo[279171]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:48 compute-2 sudo[279196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:22:48 compute-2 sudo[279196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:22:48 compute-2 sudo[279196]: pam_unix(sudo:session): session closed for user root
Jan 22 15:22:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:48.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:48 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:48 compute-2 ceph-mon[77081]: pgmap v3470: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:49.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:49.648+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:49 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:50.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:51 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:51 compute-2 ceph-mon[77081]: pgmap v3471: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:51.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:51.627+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:52 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:52.663+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:53 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:53 compute-2 ceph-mon[77081]: pgmap v3472: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:53 compute-2 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:53.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:53.623+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:53.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:54 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:54.648+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:55 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:55 compute-2 ceph-mon[77081]: pgmap v3473: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:22:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:22:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:55.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:56 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:56.694+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:57 compute-2 podman[279226]: 2026-01-22 15:22:57.00122946 +0000 UTC m=+0.059932573 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 15:22:57 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:57 compute-2 ceph-mon[77081]: pgmap v3474: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:57.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:57.670+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:22:58 compute-2 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 15:22:58 compute-2 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:22:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:58.712+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:22:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:22:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:59.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:22:59 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:22:59 compute-2 ceph-mon[77081]: pgmap v3475: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:22:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:22:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:22:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:22:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:22:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:59.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:22:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #214. Immutable memtables: 0.
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.603179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 214
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380603250, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 2811, "num_deletes": 569, "total_data_size": 5296550, "memory_usage": 5378544, "flush_reason": "Manual Compaction"}
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #215: started
Jan 22 15:23:00 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:23:00 compute-2 ceph-mon[77081]: pgmap v3476: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:00 compute-2 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380629873, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 215, "file_size": 3454007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 103913, "largest_seqno": 106719, "table_properties": {"data_size": 3443431, "index_size": 5853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 33366, "raw_average_key_size": 23, "raw_value_size": 3417977, "raw_average_value_size": 2380, "num_data_blocks": 250, "num_entries": 1436, "num_filter_entries": 1436, "num_deletions": 569, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095195, "oldest_key_time": 1769095195, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 26740 microseconds, and 11081 cpu microseconds.
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.629930) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #215: 3454007 bytes OK
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.629956) [db/memtable_list.cc:519] [default] Level-0 commit table #215 started
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.632129) [db/memtable_list.cc:722] [default] Level-0 commit table #215: memtable #1 done
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.632147) EVENT_LOG_v1 {"time_micros": 1769095380632141, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.632167) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 5282664, prev total WAL file size 5282664, number of live WAL files 2.
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000211.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.634077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [215(3373KB)], [213(11MB)]
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380634119, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [215], "files_L6": [213], "score": -1, "input_data_size": 15469609, "oldest_snapshot_seqno": -1}
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #216: 14399 keys, 13609583 bytes, temperature: kUnknown
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380740088, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 216, "file_size": 13609583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13528898, "index_size": 43580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 394666, "raw_average_key_size": 27, "raw_value_size": 13281876, "raw_average_value_size": 922, "num_data_blocks": 1597, "num_entries": 14399, "num_filter_entries": 14399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 216, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.740387) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 13609583 bytes
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.741952) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.9 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 11.5 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 15552, records dropped: 1153 output_compression: NoCompression
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.741968) EVENT_LOG_v1 {"time_micros": 1769095380741960, "job": 138, "event": "compaction_finished", "compaction_time_micros": 106045, "compaction_time_cpu_micros": 44121, "output_level": 6, "num_output_files": 1, "total_output_size": 13609583, "num_input_records": 15552, "num_output_records": 14399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:23:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:00.741+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380742686, "job": 138, "event": "table_file_deletion", "file_number": 215}
Jan 22 15:23:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000213.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380745099, "job": 138, "event": "table_file_deletion", "file_number": 213}
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.633959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:00 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:01.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:01 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:01.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:01.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:02.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:03 compute-2 ceph-mon[77081]: pgmap v3477: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:03 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:03.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:03.807+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:04 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:04.765+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:05 compute-2 ceph-mon[77081]: pgmap v3478: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:05 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:05.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:05.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:05.720+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:06 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:06.675+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:07 compute-2 ceph-mon[77081]: pgmap v3479: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:07 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:07 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:07.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:07 compute-2 sudo[279250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:07 compute-2 sudo[279250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:07 compute-2 sudo[279250]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:07 compute-2 sudo[279275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:23:07 compute-2 sudo[279275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:07 compute-2 sudo[279275]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:07 compute-2 sudo[279300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:07 compute-2 sudo[279300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:07 compute-2 sudo[279300]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:07.643+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:07.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:07 compute-2 sudo[279325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:23:07 compute-2 sudo[279325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:08 compute-2 sudo[279369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:08 compute-2 sudo[279369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:08 compute-2 sudo[279369]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:08 compute-2 sudo[279325]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:08 compute-2 sudo[279406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:08 compute-2 sudo[279406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:08 compute-2 sudo[279406]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:08 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:08 compute-2 ceph-mon[77081]: pgmap v3480: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:23:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:23:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:23:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:23:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:23:08 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:23:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:08.608+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:09 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:09.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:09.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:09.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:10 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:10 compute-2 ceph-mon[77081]: pgmap v3481: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:10.670+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:11 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:11 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:11.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:11.651+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:11.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:12 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:12 compute-2 ceph-mon[77081]: pgmap v3482: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:12.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:13.587+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:13 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:13 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:14 compute-2 podman[279434]: 2026-01-22 15:23:14.069638246 +0000 UTC m=+0.130684532 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:23:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:14.584+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:14 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:14 compute-2 ceph-mon[77081]: pgmap v3483: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:15 compute-2 sudo[279462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:15 compute-2 sudo[279462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:15 compute-2 sudo[279462]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:15 compute-2 sudo[279487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:23:15 compute-2 sudo[279487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:15 compute-2 sudo[279487]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:15.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:15.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:16 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:23:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:23:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:16.585+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:17 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:17 compute-2 ceph-mon[77081]: pgmap v3484: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:17.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:17.557+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:17.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #217. Immutable memtables: 0.
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.729496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 217
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397729652, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 514, "num_deletes": 278, "total_data_size": 529492, "memory_usage": 538816, "flush_reason": "Manual Compaction"}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #218: started
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397733505, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 218, "file_size": 315797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 106724, "largest_seqno": 107233, "table_properties": {"data_size": 313176, "index_size": 592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8080, "raw_average_key_size": 21, "raw_value_size": 307427, "raw_average_value_size": 819, "num_data_blocks": 25, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 278, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095381, "oldest_key_time": 1769095381, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 4096 microseconds, and 1353 cpu microseconds.
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.733595) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #218: 315797 bytes OK
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.733646) [db/memtable_list.cc:519] [default] Level-0 commit table #218 started
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735136) [db/memtable_list.cc:722] [default] Level-0 commit table #218: memtable #1 done
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735147) EVENT_LOG_v1 {"time_micros": 1769095397735144, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 526300, prev total WAL file size 526300, number of live WAL files 2.
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000214.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303037' seq:72057594037927935, type:22 .. '6D6772737461740033323539' seq:0, type:0; will stop at (end)
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [218(308KB)], [216(12MB)]
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397735885, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [218], "files_L6": [216], "score": -1, "input_data_size": 13925380, "oldest_snapshot_seqno": -1}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #219: 14208 keys, 10036399 bytes, temperature: kUnknown
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397796752, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 219, "file_size": 10036399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9961674, "index_size": 38132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 390776, "raw_average_key_size": 27, "raw_value_size": 9722702, "raw_average_value_size": 684, "num_data_blocks": 1369, "num_entries": 14208, "num_filter_entries": 14208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 219, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.797148) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 10036399 bytes
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.798591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.4 rd, 164.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(75.9) write-amplify(31.8) OK, records in: 14774, records dropped: 566 output_compression: NoCompression
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.798629) EVENT_LOG_v1 {"time_micros": 1769095397798613, "job": 140, "event": "compaction_finished", "compaction_time_micros": 60971, "compaction_time_cpu_micros": 27932, "output_level": 6, "num_output_files": 1, "total_output_size": 10036399, "num_input_records": 14774, "num_output_records": 14208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397798949, "job": 140, "event": "table_file_deletion", "file_number": 218}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000216.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397804127, "job": 140, "event": "table_file_deletion", "file_number": 216}
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:17 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:18 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:18 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:18.514+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:19 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:19 compute-2 ceph-mon[77081]: pgmap v3485: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:23:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2891920492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:23:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:19.564+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:19.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:20 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:20.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:21 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:21 compute-2 ceph-mon[77081]: pgmap v3486: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:21.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:21.495+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:21.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:22 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:22 compute-2 ceph-mon[77081]: pgmap v3487: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:22.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:23 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:23 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:23.475+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:23.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:24 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:24 compute-2 ceph-mon[77081]: pgmap v3488: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:24.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:25 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:25.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:25.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:25.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:26.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:26 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:26 compute-2 ceph-mon[77081]: pgmap v3489: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:27.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:27.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:27.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:27 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:27 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:27 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:27 compute-2 podman[279518]: 2026-01-22 15:23:27.975259469 +0000 UTC m=+0.040503887 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 15:23:28 compute-2 sudo[279537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:28 compute-2 sudo[279537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:28 compute-2 sudo[279537]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:28 compute-2 sudo[279562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:28 compute-2 sudo[279562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:28 compute-2 sudo[279562]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:28.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:29 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:29 compute-2 ceph-mon[77081]: pgmap v3490: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:29.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:29.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:29.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:30 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:30.538+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:31.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:31 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:31 compute-2 ceph-mon[77081]: pgmap v3491: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:31.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:32.514+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:32 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:32 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:32 compute-2 ceph-mon[77081]: pgmap v3492: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:33.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:33 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:33 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:34.522+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:35 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:35 compute-2 ceph-mon[77081]: pgmap v3493: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:35.485+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:35.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:36.495+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:36 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:37.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:37 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:37 compute-2 ceph-mon[77081]: pgmap v3494: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:37 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:38.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:39 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:39 compute-2 ceph-mon[77081]: pgmap v3495: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:39 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:39.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:39.479+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:40 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:40 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:40 compute-2 ceph-mon[77081]: pgmap v3496: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:40.500+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:41.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:41.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:41 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:42.467+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:43.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:43.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:43.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:43 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:43 compute-2 ceph-mon[77081]: pgmap v3497: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:43 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:44.456+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:44 compute-2 podman[279595]: 2026-01-22 15:23:44.584610543 +0000 UTC m=+0.101655612 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller)
Jan 22 15:23:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:44 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:44 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:44 compute-2 ceph-mon[77081]: pgmap v3498: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:45.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:45.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:45 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:46.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:46 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:46 compute-2 ceph-mon[77081]: pgmap v3499: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:23:47.261 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:23:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:23:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:23:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:23:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:23:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:47.403+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:47.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:47 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:47 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:48.427+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:48 compute-2 sudo[279623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:48 compute-2 sudo[279623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:48 compute-2 sudo[279623]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:48 compute-2 sudo[279648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:23:48 compute-2 sudo[279648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:23:48 compute-2 sudo[279648]: pam_unix(sudo:session): session closed for user root
Jan 22 15:23:48 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:48 compute-2 ceph-mon[77081]: pgmap v3500: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:49.402+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:50 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:50.411+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:51 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:51 compute-2 ceph-mon[77081]: pgmap v3501: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:51.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:52 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:52.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:53 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:53 compute-2 ceph-mon[77081]: pgmap v3502: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:53 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:53.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:53.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:54 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:54.444+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:23:55 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:55 compute-2 ceph-mon[77081]: pgmap v3503: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:55.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:55.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:56 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:56.448+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:57 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:57 compute-2 ceph-mon[77081]: pgmap v3504: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:23:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:57.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:23:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:57.458+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #220. Immutable memtables: 0.
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.775537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 220
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437775572, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 325, "total_data_size": 1095315, "memory_usage": 1111760, "flush_reason": "Manual Compaction"}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #221: started
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437783810, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 221, "file_size": 717997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 107238, "largest_seqno": 108047, "table_properties": {"data_size": 714385, "index_size": 1199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10658, "raw_average_key_size": 20, "raw_value_size": 706207, "raw_average_value_size": 1368, "num_data_blocks": 53, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 325, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095398, "oldest_key_time": 1769095398, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 8339 microseconds, and 2842 cpu microseconds.
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783871) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #221: 717997 bytes OK
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783896) [db/memtable_list.cc:519] [default] Level-0 commit table #221 started
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787223) [db/memtable_list.cc:722] [default] Level-0 commit table #221: memtable #1 done
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787240) EVENT_LOG_v1 {"time_micros": 1769095437787235, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787257) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 1090708, prev total WAL file size 1090708, number of live WAL files 2.
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000217.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787795) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035303334' seq:72057594037927935, type:22 .. '6C6F676D0035323837' seq:0, type:0; will stop at (end)
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [221(701KB)], [219(9801KB)]
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437787860, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [221], "files_L6": [219], "score": -1, "input_data_size": 10754396, "oldest_snapshot_seqno": -1}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #222: 14065 keys, 10584053 bytes, temperature: kUnknown
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437868384, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 222, "file_size": 10584053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10509312, "index_size": 38468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35205, "raw_key_size": 388461, "raw_average_key_size": 27, "raw_value_size": 10271914, "raw_average_value_size": 730, "num_data_blocks": 1380, "num_entries": 14065, "num_filter_entries": 14065, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 222, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.868666) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 10584053 bytes
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.870527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.5 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.6 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(29.7) write-amplify(14.7) OK, records in: 14724, records dropped: 659 output_compression: NoCompression
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.870553) EVENT_LOG_v1 {"time_micros": 1769095437870536, "job": 142, "event": "compaction_finished", "compaction_time_micros": 80574, "compaction_time_cpu_micros": 28607, "output_level": 6, "num_output_files": 1, "total_output_size": 10584053, "num_input_records": 14724, "num_output_records": 14065, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437870814, "job": 142, "event": "table_file_deletion", "file_number": 221}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000219.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437873091, "job": 142, "event": "table_file_deletion", "file_number": 219}
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:57 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:23:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:58.451+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:58 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:58 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:58 compute-2 ceph-mon[77081]: pgmap v3505: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:23:58 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:23:59 compute-2 podman[279680]: 2026-01-22 15:23:59.033184224 +0000 UTC m=+0.080642910 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 15:23:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:23:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:59.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:23:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:59.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:23:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:23:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:23:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:23:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:59.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:23:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:00 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:00.458+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:01 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:01 compute-2 ceph-mon[77081]: pgmap v3506: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:01.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:01.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:01.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:02.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:02 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:02 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:02 compute-2 ceph-mon[77081]: pgmap v3507: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 15:24:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:03.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 15:24:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:03.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:03.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:04 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:04 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:04.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:05 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:05 compute-2 ceph-mon[77081]: pgmap v3508: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:05.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:05.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:05.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:06 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:06.421+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:07 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:07 compute-2 ceph-mon[77081]: pgmap v3509: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:07.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:07.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:08 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:08 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:08.441+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:08 compute-2 sudo[279704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:08 compute-2 sudo[279704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:08 compute-2 sudo[279704]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:08 compute-2 sudo[279729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:08 compute-2 sudo[279729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:08 compute-2 sudo[279729]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:09 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:09 compute-2 ceph-mon[77081]: pgmap v3510: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:09.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:09.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:10 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:10.415+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:11 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:11 compute-2 ceph-mon[77081]: pgmap v3511: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 0 op/s
Jan 22 15:24:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:11.435+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:11.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:12.439+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:12 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:13.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:13.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:13.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:13 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:13 compute-2 ceph-mon[77081]: pgmap v3512: 305 pgs: 2 active+clean+laggy, 303 active+clean; 844 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 22 15:24:13 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:13 compute-2 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6442 sec, osd.2 has slow ops (SLOW_OPS)
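Annotation: the SLOW_OPS health check dates the oldest blocked request at 6442 seconds (stuck since roughly 13:37), and the osd.2 lines identify it: an omap-get-vals read of rbd_mirror_snapshot_schedule:head in pool 2 from client.14140.0:10. Dumping the op from the OSD side would show which queue it is blocked in; a sketch, assuming a cephadm-managed deployment where osd.2's admin socket lives inside its container:

    # health summary naming the OSD carrying the slow ops
    ceph health detail
    # enter the osd.2 container and dump the ops currently in flight
    cephadm shell --name osd.2 -- ceph daemon osd.2 dump_ops_in_flight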
Jan 22 15:24:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:14.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:15 compute-2 podman[279758]: 2026-01-22 15:24:15.053452398 +0000 UTC m=+0.104765490 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
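Annotation: the podman record above is a health_status event for the ovn_controller container: podman ran the configured healthcheck (the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/ovn_controller) and reported healthy with a failing streak of 0. The same check can be invoked directly, and the event stream replayed, on podman versions that emit health_status events:

    # run the container's healthcheck once; exit code 0 means healthy
    podman healthcheck run ovn_controller
    # replay recent health_status events like the journal line above
    podman events --since 10m --filter event=health_status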
Jan 22 15:24:15 compute-2 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 15:24:15 compute-2 ceph-mon[77081]: pgmap v3513: 305 pgs: 2 active+clean+laggy, 303 active+clean; 855 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 586 KiB/s wr, 19 op/s
Jan 22 15:24:15 compute-2 sudo[279784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:15 compute-2 sudo[279784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-2 sudo[279784]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:15 compute-2 sudo[279809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:24:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:15.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:15 compute-2 sudo[279809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-2 sudo[279809]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:15.518+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:15 compute-2 sudo[279834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:15 compute-2 sudo[279834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-2 sudo[279834]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:15 compute-2 sudo[279859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:24:15 compute-2 sudo[279859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:15.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:16 compute-2 sudo[279859]: pam_unix(sudo:session): session closed for user root
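Annotation: this sudo burst from ceph-admin (/bin/true, which python3, then the fsid-scoped cephadm binary with gather-facts) is the cephadm mgr module's periodic host inventory: it connects as ceph-admin, verifies passwordless sudo and a python3 interpreter, then collects host facts under an 895-second timeout. The collection can be run manually with the copied binary named in the log (the packaged cephadm command works equally, if installed):

    # emit the JSON host facts the orchestrator gathers
    sudo /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d gather-facts | python3 -m json.tool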
Jan 22 15:24:16 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:16.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:17 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:17 compute-2 ceph-mon[77081]: pgmap v3514: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:24:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:24:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:24:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:24:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:24:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:24:17 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:24:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:17.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:17.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:17.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:18 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:18 compute-2 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:18.477+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:19 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:19 compute-2 ceph-mon[77081]: pgmap v3515: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:19 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:24:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1614632194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:24:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:19.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:19.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:20 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:20 compute-2 ceph-mon[77081]: pgmap v3516: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:20.474+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:21 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:21.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:21.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:21.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:22 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:22 compute-2 ceph-mon[77081]: pgmap v3517: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 22 15:24:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:22.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:23 compute-2 sudo[279919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:23 compute-2 sudo[279919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:23 compute-2 sudo[279919]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:23 compute-2 sudo[279944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:24:23 compute-2 sudo[279944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:23 compute-2 sudo[279944]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:23.469+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:23.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:23 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:23 compute-2 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:24:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:24:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:23.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:24.509+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:24 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:24 compute-2 ceph-mon[77081]: pgmap v3518: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 MiB/s wr, 29 op/s
Jan 22 15:24:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:25.529+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:25 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:26.566+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:26 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:26 compute-2 ceph-mon[77081]: pgmap v3519: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 845 KiB/s wr, 17 op/s
Jan 22 15:24:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:27.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:27.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:27.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:27 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:27 compute-2 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:28.593+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:28 compute-2 sudo[279971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:28 compute-2 sudo[279971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:28 compute-2 sudo[279971]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:28 compute-2 sudo[279996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:28 compute-2 sudo[279996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:28 compute-2 sudo[279996]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:29 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:29 compute-2 ceph-mon[77081]: pgmap v3520: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:29.580+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:29.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:30 compute-2 podman[280022]: 2026-01-22 15:24:30.014904432 +0000 UTC m=+0.072440343 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:24:30 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:30.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:31 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:31 compute-2 ceph-mon[77081]: pgmap v3521: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:31.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:31.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:31.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:32.520+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:32 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:33.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:33.507+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:33.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:34 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:34 compute-2 ceph-mon[77081]: pgmap v3522: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:34 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:34.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:35 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:35 compute-2 ceph-mon[77081]: pgmap v3523: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:35.500+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:35.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:36 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:36.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:37.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:37 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:37 compute-2 ceph-mon[77081]: pgmap v3524: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:37 compute-2 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:37.521+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:37.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:38.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:38 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:38 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:38 compute-2 ceph-mon[77081]: pgmap v3525: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:39.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:39.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:39.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:39 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:40.521+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:41 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:41 compute-2 ceph-mon[77081]: pgmap v3526: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:41.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:41.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:41.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:42 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:42.518+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:43 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:43 compute-2 ceph-mon[77081]: pgmap v3527: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:43 compute-2 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:43.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:43.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:43.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:44 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 15:24:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:44.533+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:45 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:45 compute-2 ceph-mon[77081]: pgmap v3528: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:45.560+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:45.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:46 compute-2 podman[280050]: 2026-01-22 15:24:46.057580032 +0000 UTC m=+0.111521189 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 15:24:46 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:46 compute-2 ceph-mon[77081]: pgmap v3529: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:46.547+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:24:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:24:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:24:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:24:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:24:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:24:47 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:47 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:47.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:48.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:48 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:48 compute-2 ceph-mon[77081]: pgmap v3530: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:48 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 6478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:48 compute-2 sudo[280076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:48 compute-2 sudo[280076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:48 compute-2 sudo[280076]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:49 compute-2 sudo[280102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:24:49 compute-2 sudo[280102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:24:49 compute-2 sudo[280102]: pam_unix(sudo:session): session closed for user root
Jan 22 15:24:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:49.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:49.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:49 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:49.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:50.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:50 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:50 compute-2 ceph-mon[77081]: pgmap v3531: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:51.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:51.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:51.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:51 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:52.585+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:53 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:53 compute-2 ceph-mon[77081]: pgmap v3532: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:53 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 6483 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:53.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:53.596+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:53.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:54 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:54.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:24:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:55.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:55 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:55 compute-2 ceph-mon[77081]: pgmap v3533: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:55 compute-2 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:24:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:55.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:55.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:56 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:56 compute-2 ceph-mon[77081]: pgmap v3534: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:56.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:24:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:57.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:24:57 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:57.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:57.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:58 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:58 compute-2 ceph-mon[77081]: pgmap v3535: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:24:58 compute-2 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 6488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:24:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:58.676+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:24:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:59.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:24:59 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:59.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:24:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:24:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:24:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:24:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:59.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:24:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:00 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:00 compute-2 ceph-mon[77081]: pgmap v3536: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:00.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:01 compute-2 podman[280133]: 2026-01-22 15:25:01.045659893 +0000 UTC m=+0.088545260 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 22 15:25:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:01.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:01.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:01.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:01 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:02.766+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:03 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:03 compute-2 ceph-mon[77081]: pgmap v3537: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:03 compute-2 ceph-mon[77081]: Health check update: 108 slow ops, oldest one blocked for 6493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:03.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:03.813+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:03.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:04 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:04.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:05 compute-2 ceph-mon[77081]: pgmap v3538: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:05 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:05.812+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:06 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:06.801+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:07 compute-2 ceph-mon[77081]: pgmap v3539: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:07 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:07 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:07.786+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:08 compute-2 ceph-mon[77081]: pgmap v3540: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:08 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:08 compute-2 ceph-mon[77081]: Health check update: 108 slow ops, oldest one blocked for 6498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:08.791+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:09 compute-2 sudo[280156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:09 compute-2 sudo[280156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:09 compute-2 sudo[280156]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:09 compute-2 sudo[280181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:09 compute-2 sudo[280181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:09 compute-2 sudo[280181]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:09 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:09.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:10.801+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:10 compute-2 ceph-mon[77081]: pgmap v3541: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:10 compute-2 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 15:25:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:11.777+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:11.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:12 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:12.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:13 compute-2 ceph-mon[77081]: pgmap v3542: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:13 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:13 compute-2 ceph-mon[77081]: Health check update: 108 slow ops, oldest one blocked for 6503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:13.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:13.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:13.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:14 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:14.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:15 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:15 compute-2 ceph-mon[77081]: pgmap v3543: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:15 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:15.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:15.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:15.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:16 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:16 compute-2 ceph-mon[77081]: pgmap v3544: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:16.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:17 compute-2 podman[280210]: 2026-01-22 15:25:17.030834346 +0000 UTC m=+0.099606363 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:25:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:17.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:17 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:17.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:17.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:25:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:25:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:25:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:25:18 compute-2 ceph-mon[77081]: pgmap v3545: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:18 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:18 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:25:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:25:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:18.740+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:19.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:19 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:19.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:19.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:20.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:20 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:20 compute-2 ceph-mon[77081]: pgmap v3546: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:21.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:21.720+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:21.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:22 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:22 compute-2 ceph-mon[77081]: pgmap v3547: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:22.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:23 compute-2 sudo[280239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:23 compute-2 sudo[280239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-2 sudo[280239]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:23 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:23 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:23 compute-2 sudo[280264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:25:23 compute-2 sudo[280264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-2 sudo[280264]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:23 compute-2 sudo[280289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:23 compute-2 sudo[280289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-2 sudo[280289]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:23 compute-2 sudo[280314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:25:23 compute-2 sudo[280314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:23.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:23.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:23 compute-2 sudo[280314]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:24 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:24 compute-2 ceph-mon[77081]: pgmap v3548: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:25:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:25:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:25:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:25:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:25:24 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:25:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:24.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:25.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:25.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:25 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:25 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:26.651+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:26 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:26 compute-2 ceph-mon[77081]: pgmap v3549: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:27.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:27.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:27 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:27.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:28.700+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:28 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:28 compute-2 ceph-mon[77081]: pgmap v3550: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:28 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:25:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 20K writes, 109K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1719 writes, 9797 keys, 1719 commit groups, 1.0 writes per commit group, ingest: 16.28 MB, 0.03 MB/s
                                           Interval WAL: 1719 writes, 1719 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.4      2.14              0.43        71    0.030       0      0       0.0       0.0
                                             L6      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    116.7    101.0      6.76              2.31        70    0.097    742K    40K       0.0       0.0
                                            Sum      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.9     88.7     89.8      8.90              2.74       141    0.063    742K    40K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8     37.7     37.9      1.99              0.23        12    0.166     89K   4955       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    116.7    101.0      6.76              2.31        70    0.097    742K    40K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.5      2.13              0.43        70    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.114, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.78 GB write, 0.12 MB/s write, 0.77 GB read, 0.12 MB/s read, 8.9 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 2.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 83.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000558 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4357,78.87 MB,25.9432%) FilterBlock(141,2.02 MB,0.663491%) IndexBlock(141,2.50 MB,0.823397%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:25:29 compute-2 sudo[280373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:29 compute-2 sudo[280373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:29 compute-2 sudo[280373]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:29 compute-2 sudo[280398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:29 compute-2 sudo[280398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:29 compute-2 sudo[280398]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:29.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:29.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:29 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:29.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:30 compute-2 sudo[280423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:30 compute-2 sudo[280423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:30 compute-2 sudo[280423]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:30 compute-2 sudo[280448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:25:30 compute-2 sudo[280448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:30 compute-2 sudo[280448]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:30.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:31 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:31 compute-2 ceph-mon[77081]: pgmap v3551: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:25:31 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:25:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:31.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:31.721+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:31.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:32 compute-2 podman[280474]: 2026-01-22 15:25:32.047286137 +0000 UTC m=+0.089824763 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:25:32 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:32.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:33 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:33 compute-2 ceph-mon[77081]: pgmap v3552: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:33 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:33.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:33.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:33.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:34 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:34.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:35 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:35 compute-2 ceph-mon[77081]: pgmap v3553: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:35.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:35.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:36 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:36.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:37 compute-2 ceph-mon[77081]: pgmap v3554: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:37 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:37.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:37.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:37.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:38 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:38 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:38.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:39 compute-2 ceph-mon[77081]: pgmap v3555: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:39 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:39.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:39.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:40.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:41 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:41.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:41.772+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:41.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:42 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:42 compute-2 ceph-mon[77081]: pgmap v3556: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:42 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:42.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:43 compute-2 ceph-mon[77081]: pgmap v3557: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:43 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:43 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:43.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:43.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:44 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:44.786+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:45 compute-2 ceph-mon[77081]: pgmap v3558: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:45 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:45.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:45.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:45.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:46.819+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:47 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:25:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:25:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:25:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:25:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:25:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:25:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:47.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:47.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:47.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:48 compute-2 podman[280501]: 2026-01-22 15:25:48.075150456 +0000 UTC m=+0.125506150 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 15:25:48 compute-2 ceph-mon[77081]: pgmap v3559: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:48 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:48 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:48 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:48.769+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:49 compute-2 sudo[280529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:49 compute-2 sudo[280529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:49 compute-2 sudo[280529]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:49 compute-2 ceph-mon[77081]: pgmap v3560: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:49 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:49 compute-2 sudo[280554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:25:49 compute-2 sudo[280554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:25:49 compute-2 sudo[280554]: pam_unix(sudo:session): session closed for user root
Jan 22 15:25:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:49.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:49.763+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:49.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:50 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:50 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:50 compute-2 ceph-mon[77081]: pgmap v3561: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:50.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:51 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:51.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:51.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:52 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:52 compute-2 ceph-mon[77081]: pgmap v3562: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:52.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:53.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:53.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:53 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:53 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #223. Immutable memtables: 0.
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.083891) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 223
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554083995, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 1928, "num_deletes": 449, "total_data_size": 3296326, "memory_usage": 3359256, "flush_reason": "Manual Compaction"}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #224: started
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554150028, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 224, "file_size": 2150401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 108052, "largest_seqno": 109975, "table_properties": {"data_size": 2143190, "index_size": 3576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23099, "raw_average_key_size": 22, "raw_value_size": 2125694, "raw_average_value_size": 2088, "num_data_blocks": 155, "num_entries": 1018, "num_filter_entries": 1018, "num_deletions": 449, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095438, "oldest_key_time": 1769095438, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 66189 microseconds, and 10398 cpu microseconds.
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.150094) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #224: 2150401 bytes OK
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.150120) [db/memtable_list.cc:519] [default] Level-0 commit table #224 started
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.205351) [db/memtable_list.cc:722] [default] Level-0 commit table #224: memtable #1 done
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.205392) EVENT_LOG_v1 {"time_micros": 1769095554205384, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.205415) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 3286617, prev total WAL file size 3286617, number of live WAL files 2.
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000220.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.206484) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [224(2100KB)], [222(10MB)]
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554206541, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [224], "files_L6": [222], "score": -1, "input_data_size": 12734454, "oldest_snapshot_seqno": -1}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #225: 14172 keys, 10835589 bytes, temperature: kUnknown
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554459951, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 225, "file_size": 10835589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10759977, "index_size": 39083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390614, "raw_average_key_size": 27, "raw_value_size": 10520563, "raw_average_value_size": 742, "num_data_blocks": 1404, "num_entries": 14172, "num_filter_entries": 14172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 225, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.460272) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 10835589 bytes
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.617648) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 50.2 rd, 42.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.1 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.0) write-amplify(5.0) OK, records in: 15083, records dropped: 911 output_compression: NoCompression
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.617685) EVENT_LOG_v1 {"time_micros": 1769095554617672, "job": 144, "event": "compaction_finished", "compaction_time_micros": 253502, "compaction_time_cpu_micros": 33148, "output_level": 6, "num_output_files": 1, "total_output_size": 10835589, "num_input_records": 15083, "num_output_records": 14172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554618298, "job": 144, "event": "table_file_deletion", "file_number": 224}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000222.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554620136, "job": 144, "event": "table_file_deletion", "file_number": 222}
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.206405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:25:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:54.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:25:55 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:55 compute-2 ceph-mon[77081]: pgmap v3563: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:55.697+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:55.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:55.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:56 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:56.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:57 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:57 compute-2 ceph-mon[77081]: pgmap v3564: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:57.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:57.781+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:25:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:25:58 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:58 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:25:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:58.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:59 compute-2 ceph-mon[77081]: pgmap v3565: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:25:59 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:25:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:59.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:25:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:59.826+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:25:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:25:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:25:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:25:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:59.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:25:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:00 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:00.864+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:01.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:01 compute-2 ceph-mon[77081]: pgmap v3566: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:01 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:01 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:01.872+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:01.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:02.826+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:02 compute-2 ceph-mon[77081]: pgmap v3567: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:02 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:03 compute-2 podman[280586]: 2026-01-22 15:26:03.022132658 +0000 UTC m=+0.071149338 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 15:26:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:03.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:03.851+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:03.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:04 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:04 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:04.885+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:05 compute-2 ceph-mon[77081]: pgmap v3568: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:05 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:05.917+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:05.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:06 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:06.917+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:07 compute-2 ceph-mon[77081]: pgmap v3569: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:07 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:07.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:07.884+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:08 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:08 compute-2 ceph-mon[77081]: pgmap v3570: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:08 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:08.858+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:09 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:09 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:09 compute-2 sudo[280609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:09 compute-2 sudo[280609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:09 compute-2 sudo[280609]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:09 compute-2 sudo[280634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:09 compute-2 sudo[280634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:09 compute-2 sudo[280634]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:09.888+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:10 compute-2 ceph-mon[77081]: pgmap v3571: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:10 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:10.877+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:11 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:11.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:11.830+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:11.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:12 compute-2 ceph-mon[77081]: pgmap v3572: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:12 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:12.809+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:13 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:13 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:13.831+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:14 compute-2 ceph-mon[77081]: pgmap v3573: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:14 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:14.834+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:15 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:15.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:15.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:16 compute-2 ceph-mon[77081]: pgmap v3574: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:16 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:16.807+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:17 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:17.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:17.831+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:18 compute-2 ceph-mon[77081]: pgmap v3575: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:18 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:18 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:26:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2075344860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:26:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:18.851+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:19 compute-2 podman[280664]: 2026-01-22 15:26:19.024285489 +0000 UTC m=+0.080356552 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 15:26:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:19.891+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:20 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:20.930+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:21 compute-2 ceph-mon[77081]: pgmap v3576: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:21 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:21.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:21.950+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:22.961+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:23 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:23.960+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:24 compute-2 ceph-mon[77081]: pgmap v3577: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:24 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:24 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:24 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:24 compute-2 ceph-mon[77081]: pgmap v3578: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:25.011+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:25 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:25.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:26.532+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:26 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:26 compute-2 ceph-mon[77081]: pgmap v3579: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:27 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:27.575+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:28.529+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:28 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:28 compute-2 ceph-mon[77081]: pgmap v3580: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:28 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:28 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:29.573+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:29 compute-2 sudo[280695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:29 compute-2 sudo[280695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:29 compute-2 sudo[280695]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:29 compute-2 sudo[280720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:29 compute-2 sudo[280720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:29 compute-2 sudo[280720]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:30.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:30 compute-2 sudo[280745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:30 compute-2 sudo[280745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:30 compute-2 sudo[280745]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:30 compute-2 sudo[280770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:26:30 compute-2 sudo[280770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:30 compute-2 sudo[280770]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:30 compute-2 sudo[280795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:30 compute-2 sudo[280795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:30 compute-2 sudo[280795]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:30 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:30 compute-2 ceph-mon[77081]: pgmap v3581: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:30 compute-2 sudo[280820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:26:30 compute-2 sudo[280820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:26:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.5 total, 600.0 interval
                                           Cumulative writes: 13K writes, 43K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4531 syncs, 3.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 747 writes, 1292 keys, 747 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
                                           Interval WAL: 747 writes, 310 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:26:31 compute-2 sudo[280820]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:31.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:31 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:31.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:32.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:33 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:33 compute-2 ceph-mon[77081]: pgmap v3582: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:33 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:26:33 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:26:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:33.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:33.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:33.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:34 compute-2 podman[280879]: 2026-01-22 15:26:34.005846168 +0000 UTC m=+0.062925880 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 15:26:34 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:34.577+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:35 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:35 compute-2 ceph-mon[77081]: pgmap v3583: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:35.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:35.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:35.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:36 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:36.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:37 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:37 compute-2 ceph-mon[77081]: pgmap v3584: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:37.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:38 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:38 compute-2 ceph-mon[77081]: pgmap v3585: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:38 compute-2 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:38.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:39 compute-2 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:26:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:39 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:26:39 compute-2 sudo[280900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:39 compute-2 sudo[280900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:39 compute-2 sudo[280900]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:39.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:39 compute-2 sudo[280925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:26:39 compute-2 sudo[280925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:39 compute-2 sudo[280925]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:39.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:40 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:40 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:40 compute-2 ceph-mon[77081]: pgmap v3586: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:40.541+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:41.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:41 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:41 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:41.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:42.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:42 compute-2 ceph-mon[77081]: pgmap v3587: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:43.542+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:43 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:43 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 6593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:43.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:44.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:44 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:44 compute-2 ceph-mon[77081]: pgmap v3588: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:45.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:45.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:45 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:45.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:46.558+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:46 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:46 compute-2 ceph-mon[77081]: pgmap v3589: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:26:47.265 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:26:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:26:47.265 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:26:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:26:47.265 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:26:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:47.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:47.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:47.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:47 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:47 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:47 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 6598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:48.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:49 compute-2 ceph-mon[77081]: pgmap v3590: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:49 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:26:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:49.520+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:49.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:49 compute-2 sudo[280964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:49 compute-2 sudo[280964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:49 compute-2 sudo[280964]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:50 compute-2 podman[280955]: 2026-01-22 15:26:50.02033431 +0000 UTC m=+0.083650960 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:26:50 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:50 compute-2 sudo[281005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:26:50 compute-2 sudo[281005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:26:50 compute-2 sudo[281005]: pam_unix(sudo:session): session closed for user root
Jan 22 15:26:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:50.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:51 compute-2 ceph-mon[77081]: pgmap v3591: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:51 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:51.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:51.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:51.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:52 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:52.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:53 compute-2 ceph-mon[77081]: pgmap v3592: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:53 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:53 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:53.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:26:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:53.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:26:54 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:54.517+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:26:55 compute-2 ceph-mon[77081]: pgmap v3593: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:55 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:55.509+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:26:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:55.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:26:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:55.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:56.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:57 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:57.497+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:57.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:57.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:26:58 compute-2 ceph-mon[77081]: pgmap v3594: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:58 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:58 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:58.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:59 compute-2 ceph-mon[77081]: pgmap v3595: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:26:59 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:26:59 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:59.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:26:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:26:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:26:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:26:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:00.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:00 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:00.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:01 compute-2 ceph-mon[77081]: pgmap v3596: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:01.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:01.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:02.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:02 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:02.599+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:03 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:03 compute-2 ceph-mon[77081]: pgmap v3597: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Jan 22 15:27:03 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:03.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:04.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:04 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:04.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:05 compute-2 podman[281039]: 2026-01-22 15:27:05.020581711 +0000 UTC m=+0.075398201 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 22 15:27:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:05 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:05 compute-2 ceph-mon[77081]: pgmap v3598: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Jan 22 15:27:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:05.607+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:05.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:06.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:06 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:06.600+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:07 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:07 compute-2 ceph-mon[77081]: pgmap v3599: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:07.630+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:07.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:08.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:08 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:08 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:08.620+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:09 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:09 compute-2 ceph-mon[77081]: pgmap v3600: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:09.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:09.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:10.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:10 compute-2 sudo[281061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:10 compute-2 sudo[281061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:10 compute-2 sudo[281061]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:10 compute-2 sudo[281086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:10 compute-2 sudo[281086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:10 compute-2 sudo[281086]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:10 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:10 compute-2 ceph-mon[77081]: pgmap v3601: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:10.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:11 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:11.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:11.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:12.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:12 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:12 compute-2 ceph-mon[77081]: pgmap v3602: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 22 15:27:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:12.781+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:13 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:13 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:13.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:13.822+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:14.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:14 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:14 compute-2 ceph-mon[77081]: pgmap v3603: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Jan 22 15:27:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:14.848+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:15 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:15.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:15.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:16.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:16.798+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:16 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:16 compute-2 ceph-mon[77081]: pgmap v3604: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 112 op/s
Jan 22 15:27:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:17.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:17 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:17 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:18.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:18.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:18 compute-2 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:27:18 compute-2 ceph-mon[77081]: pgmap v3605: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:18 compute-2 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:19.833+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:20.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:20 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:20 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:27:20 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3894192697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:27:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:20.795+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:21 compute-2 podman[281117]: 2026-01-22 15:27:21.02764103 +0000 UTC m=+0.082728535 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 15:27:21 compute-2 ceph-mon[77081]: pgmap v3606: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:21 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:21 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:21.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:21.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:22.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:22 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:22 compute-2 ceph-mon[77081]: pgmap v3607: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:22.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:23 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:23 compute-2 ceph-mon[77081]: Health check update: 80 slow ops, oldest one blocked for 6633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:23.788+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:24.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:24 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:24 compute-2 ceph-mon[77081]: pgmap v3608: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:24.817+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:25 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:25.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:25.845+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:26.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:26.813+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:27 compute-2 ceph-mon[77081]: pgmap v3609: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 22 15:27:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:27.843+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:28.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:28 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:28 compute-2 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 15:27:28 compute-2 ceph-mon[77081]: pgmap v3610: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:28 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:28 compute-2 ceph-mon[77081]: Health check update: 80 slow ops, oldest one blocked for 6638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:28.891+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:29 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:29.928+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:30.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:30 compute-2 sudo[281150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:30 compute-2 sudo[281150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:30 compute-2 sudo[281150]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:30 compute-2 sudo[281175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:30 compute-2 sudo[281175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:30 compute-2 sudo[281175]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:30 compute-2 ceph-mon[77081]: pgmap v3611: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:30 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:30.962+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:31.939+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:32 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:32.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:32.970+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:33 compute-2 ceph-mon[77081]: pgmap v3612: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:33 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:33 compute-2 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:33.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:34.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:34 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:34.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:35.006+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:35 compute-2 ceph-mon[77081]: pgmap v3613: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:35 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:35.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:36 compute-2 podman[281203]: 2026-01-22 15:27:36.008107745 +0000 UTC m=+0.069855414 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:27:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:36.052+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:36 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:37.034+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:37 compute-2 ceph-mon[77081]: pgmap v3614: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:37 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:37.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:38.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:38 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:38 compute-2 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:38.995+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:39 compute-2 ceph-mon[77081]: pgmap v3615: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:39 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:39 compute-2 sudo[281226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:39 compute-2 sudo[281226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-2 sudo[281226]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:39 compute-2 sudo[281251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:27:39 compute-2 sudo[281251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-2 sudo[281251]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:39 compute-2 sudo[281276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:39 compute-2 sudo[281276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-2 sudo[281276]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:39.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:39 compute-2 sudo[281301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:27:39 compute-2 sudo[281301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:39.986+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:40 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:40 compute-2 sudo[281301]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:41.034+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:41.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:42 compute-2 ceph-mon[77081]: pgmap v3616: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:42 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:42.077+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:43 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:43 compute-2 ceph-mon[77081]: pgmap v3617: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:43 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:43.052+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:43.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:43 compute-2 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:43 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:27:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:27:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:44.032+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:44 compute-2 ceph-mon[77081]: pgmap v3618: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:44 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:45.054+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:45 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:46.017+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:47 compute-2 ceph-mon[77081]: pgmap v3619: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:47 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:47.043+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:27:47.266 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:27:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:27:47.267 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:27:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:27:47.267 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:27:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:48 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:48 compute-2 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:48.071+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:49.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:49 compute-2 ceph-mon[77081]: pgmap v3620: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:49 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:49.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:50.055+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:50 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:50 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:27:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:50 compute-2 sudo[281363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:50 compute-2 sudo[281363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:50 compute-2 sudo[281363]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:50 compute-2 sudo[281386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:50 compute-2 sudo[281386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:50 compute-2 sudo[281386]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:50 compute-2 sudo[281408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:27:50 compute-2 sudo[281408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:50 compute-2 sudo[281408]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:50 compute-2 sudo[281436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:27:50 compute-2 sudo[281436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:27:50 compute-2 sudo[281436]: pam_unix(sudo:session): session closed for user root
Jan 22 15:27:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:51.058+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:51 compute-2 ceph-mon[77081]: pgmap v3621: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:51 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:52 compute-2 podman[281464]: 2026-01-22 15:27:52.055188023 +0000 UTC m=+0.102237763 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:27:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:52.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:52 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:53.096+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:53 compute-2 ceph-mon[77081]: pgmap v3622: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:53 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:53 compute-2 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:27:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:53.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:27:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:54.088+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:54 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:55.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:55 compute-2 ceph-mon[77081]: pgmap v3623: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:55 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:27:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:55.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:56.172+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:56 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:57.156+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:57 compute-2 ceph-mon[77081]: pgmap v3624: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:57 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:57.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:27:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:27:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:27:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:58.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:27:58 compute-2 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 15:27:58 compute-2 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:27:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:59.149+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:27:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:27:59 compute-2 ceph-mon[77081]: pgmap v3625: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:27:59 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:27:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:27:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:27:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:00.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:00.117+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:00 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:01.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:01 compute-2 ceph-mon[77081]: pgmap v3626: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:01 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:02.122+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:02 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:02 compute-2 ceph-mon[77081]: pgmap v3627: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:03.087+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:03 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:03 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:03.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:04.076+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:04 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:04 compute-2 ceph-mon[77081]: pgmap v3628: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:05.066+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:05 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:05.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:06.094+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:06.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:06 compute-2 ceph-mon[77081]: pgmap v3629: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:06 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:07 compute-2 podman[281499]: 2026-01-22 15:28:07.003164465 +0000 UTC m=+0.054873296 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 22 15:28:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:07.065+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:07.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:08.064+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:08 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:09.064+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:09 compute-2 ceph-mon[77081]: pgmap v3630: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:09 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:09 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:09.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:10.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:10 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:10 compute-2 sudo[281520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:10 compute-2 sudo[281520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:10 compute-2 sudo[281520]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:10 compute-2 sudo[281545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:10 compute-2 sudo[281545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:10 compute-2 sudo[281545]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:11.035+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:11 compute-2 ceph-mon[77081]: pgmap v3631: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:11 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:12.053+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:12.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:12 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:12 compute-2 ceph-mon[77081]: pgmap v3632: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:13.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:13.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:13 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:13 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6683 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:14.082+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:14.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:15.034+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:15 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:15 compute-2 ceph-mon[77081]: pgmap v3633: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:15 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:16.056+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:16 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:16 compute-2 ceph-mon[77081]: pgmap v3634: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:16 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:17.053+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:17 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:18.010+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:18.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:19.017+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:19 compute-2 ceph-mon[77081]: pgmap v3635: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:19 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:19 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:28:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4199670489' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:28:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:20.018+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:21.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:21 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:21 compute-2 ceph-mon[77081]: pgmap v3636: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:21 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:22.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:22.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:22 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:23.022+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:23 compute-2 podman[281577]: 2026-01-22 15:28:23.022790618 +0000 UTC m=+0.075666273 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 22 15:28:23 compute-2 ceph-mon[77081]: pgmap v3637: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:23 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:23.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:24.057+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:24 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:25.049+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:25 compute-2 ceph-mon[77081]: pgmap v3638: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:25 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:26.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:26.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:27.004+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:27 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:27 compute-2 ceph-mon[77081]: pgmap v3639: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:27 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:27.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:28.038+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:28 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:28 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:29.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:29 compute-2 ceph-mon[77081]: pgmap v3640: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:29 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:29 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:29.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:29.982+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:30 compute-2 ceph-mon[77081]: pgmap v3641: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:30 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:30 compute-2 sudo[281606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:30 compute-2 sudo[281606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:30 compute-2 sudo[281606]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:30 compute-2 sudo[281631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:30 compute-2 sudo[281631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:30 compute-2 sudo[281631]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:31.021+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:31 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:31.976+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:32 compute-2 ceph-mon[77081]: pgmap v3642: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:32 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:33.008+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:33.974+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:34 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:34 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:34.960+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:35 compute-2 ceph-mon[77081]: pgmap v3643: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:35 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:35.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:35.932+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:36.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:36 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:36 compute-2 ceph-mon[77081]: pgmap v3644: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:36 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:36.970+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:37.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:37 compute-2 podman[281660]: 2026-01-22 15:28:37.987253498 +0000 UTC m=+0.050659882 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 15:28:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:38.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:38.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:38 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:38 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:39.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:39 compute-2 ceph-mon[77081]: pgmap v3645: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:39 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:40.032+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:40.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:40.986+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:40 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:40 compute-2 ceph-mon[77081]: pgmap v3646: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:40 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:41.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:42.026+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:42.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:42 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:43.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #226. Immutable memtables: 0.
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.496630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 226
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723496689, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 2692, "num_deletes": 542, "total_data_size": 4859953, "memory_usage": 4936368, "flush_reason": "Manual Compaction"}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #227: started
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723529425, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 227, "file_size": 3165791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 109980, "largest_seqno": 112667, "table_properties": {"data_size": 3155857, "index_size": 5403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31224, "raw_average_key_size": 22, "raw_value_size": 3131924, "raw_average_value_size": 2304, "num_data_blocks": 228, "num_entries": 1359, "num_filter_entries": 1359, "num_deletions": 542, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095554, "oldest_key_time": 1769095554, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 32831 microseconds, and 9196 cpu microseconds.
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.529468) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #227: 3165791 bytes OK
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.529490) [db/memtable_list.cc:519] [default] Level-0 commit table #227 started
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.533395) [db/memtable_list.cc:722] [default] Level-0 commit table #227: memtable #1 done
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.533419) EVENT_LOG_v1 {"time_micros": 1769095723533412, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.533444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 4846747, prev total WAL file size 4846747, number of live WAL files 2.
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000223.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.534721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035323836' seq:72057594037927935, type:22 .. '6C6F676D0035353339' seq:0, type:0; will stop at (end)
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [227(3091KB)], [225(10MB)]
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723534750, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [227], "files_L6": [225], "score": -1, "input_data_size": 14001380, "oldest_snapshot_seqno": -1}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #228: 14432 keys, 13749815 bytes, temperature: kUnknown
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723837940, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 228, "file_size": 13749815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13669490, "index_size": 43156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36101, "raw_key_size": 395987, "raw_average_key_size": 27, "raw_value_size": 13422707, "raw_average_value_size": 930, "num_data_blocks": 1574, "num_entries": 14432, "num_filter_entries": 14432, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 228, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.838202) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 13749815 bytes
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.841542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.2 rd, 45.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.3 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.8) write-amplify(4.3) OK, records in: 15531, records dropped: 1099 output_compression: NoCompression
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.841568) EVENT_LOG_v1 {"time_micros": 1769095723841556, "job": 146, "event": "compaction_finished", "compaction_time_micros": 303274, "compaction_time_cpu_micros": 30786, "output_level": 6, "num_output_files": 1, "total_output_size": 13749815, "num_input_records": 15531, "num_output_records": 14432, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723842342, "job": 146, "event": "table_file_deletion", "file_number": 227}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000225.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723844102, "job": 146, "event": "table_file_deletion", "file_number": 225}
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.534646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:43.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:43.982+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:44.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:44 compute-2 ceph-mon[77081]: pgmap v3647: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:44 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:44 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:44 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:45.003+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:45 compute-2 ceph-mon[77081]: pgmap v3648: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:45 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:46.017+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:46 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #229. Immutable memtables: 0.
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.884756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 229
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726884852, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 308, "num_deletes": 258, "total_data_size": 128067, "memory_usage": 135144, "flush_reason": "Manual Compaction"}
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #230: started
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726894378, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 230, "file_size": 83592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112672, "largest_seqno": 112975, "table_properties": {"data_size": 81614, "index_size": 141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5312, "raw_average_key_size": 18, "raw_value_size": 77703, "raw_average_value_size": 274, "num_data_blocks": 6, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095723, "oldest_key_time": 1769095723, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 9646 microseconds, and 1318 cpu microseconds.
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.894421) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #230: 83592 bytes OK
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.894442) [db/memtable_list.cc:519] [default] Level-0 commit table #230 started
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.922356) [db/memtable_list.cc:722] [default] Level-0 commit table #230: memtable #1 done
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.922421) EVENT_LOG_v1 {"time_micros": 1769095726922408, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.922455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 125797, prev total WAL file size 125797, number of live WAL files 2.
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000226.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.923179) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [230(81KB)], [228(13MB)]
Jan 22 15:28:46 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726923262, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [230], "files_L6": [228], "score": -1, "input_data_size": 13833407, "oldest_snapshot_seqno": -1}
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #231: 14192 keys, 12058708 bytes, temperature: kUnknown
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727021649, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 231, "file_size": 12058708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11981266, "index_size": 40849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 391674, "raw_average_key_size": 27, "raw_value_size": 11739848, "raw_average_value_size": 827, "num_data_blocks": 1472, "num_entries": 14192, "num_filter_entries": 14192, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 231, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.022045) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 12058708 bytes
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.025764) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.3 rd, 122.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(309.7) write-amplify(144.3) OK, records in: 14715, records dropped: 523 output_compression: NoCompression
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.025781) EVENT_LOG_v1 {"time_micros": 1769095727025774, "job": 148, "event": "compaction_finished", "compaction_time_micros": 98566, "compaction_time_cpu_micros": 41832, "output_level": 6, "num_output_files": 1, "total_output_size": 12058708, "num_input_records": 14715, "num_output_records": 14192, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727026243, "job": 148, "event": "table_file_deletion", "file_number": 230}
Jan 22 15:28:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:47.026+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000228.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727028717, "job": 148, "event": "table_file_deletion", "file_number": 228}
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.923071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:28:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:28:47.268 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:28:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:28:47.268 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:28:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:28:47.268 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:28:47 compute-2 ceph-mon[77081]: pgmap v3649: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:47 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:48.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:48.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:49 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:49 compute-2 ceph-mon[77081]: pgmap v3650: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:49 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:49 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:49.069+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:49.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:50.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:50 compute-2 sudo[281686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:50 compute-2 sudo[281686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-2 sudo[281686]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:50 compute-2 sudo[281711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:28:50 compute-2 sudo[281711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-2 sudo[281711]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:50 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:50 compute-2 sudo[281736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:50 compute-2 sudo[281736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-2 sudo[281736]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:50 compute-2 sudo[281761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:28:50 compute-2 sudo[281761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:50 compute-2 sudo[281786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:50 compute-2 sudo[281786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-2 sudo[281786]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:50 compute-2 sudo[281818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:28:50 compute-2 sudo[281818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:28:50 compute-2 sudo[281818]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:51.144+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:51 compute-2 sudo[281761]: pam_unix(sudo:session): session closed for user root
Jan 22 15:28:51 compute-2 ceph-mon[77081]: pgmap v3651: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:51 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:51 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:28:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:28:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:52.159+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:53.175+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:53.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:28:54 compute-2 podman[281869]: 2026-01-22 15:28:54.026410099 +0000 UTC m=+0.086215951 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:28:54 compute-2 ceph-mon[77081]: pgmap v3652: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:28:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:28:54 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:28:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:28:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:28:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:28:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:54.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:54.175+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:55.148+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:55 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:55 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:55 compute-2 ceph-mon[77081]: pgmap v3653: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:55.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:56.165+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:56.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:28:56 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:56 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:56 compute-2 ceph-mon[77081]: pgmap v3654: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:57.170+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:57 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:57 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:28:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:57.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:28:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:28:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:28:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:58.206+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:59.191+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:28:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:59 compute-2 ceph-mon[77081]: pgmap v3655: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:28:59 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:28:59 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:28:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:28:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:28:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:59.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:00.226+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:00 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:00 compute-2 ceph-mon[77081]: pgmap v3656: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:01.207+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:01.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:01 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:02.229+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:02 compute-2 sudo[281901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:02 compute-2 sudo[281901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:02 compute-2 sudo[281901]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:02 compute-2 sudo[281926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:29:02 compute-2 sudo[281926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:02 compute-2 sudo[281926]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:02 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:02 compute-2 ceph-mon[77081]: pgmap v3657: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:29:02 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:29:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:03.203+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:04.154+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:04 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:04 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:05.157+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:05 compute-2 ceph-mon[77081]: pgmap v3658: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:05 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:06.181+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:06 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:07.142+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:07 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:07 compute-2 ceph-mon[77081]: pgmap v3659: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:07 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:08.128+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:08 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:08 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:09 compute-2 podman[281954]: 2026-01-22 15:29:09.015418441 +0000 UTC m=+0.062031682 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:29:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:09.174+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:10 compute-2 ceph-mon[77081]: pgmap v3660: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:10 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:10.168+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:10.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:11 compute-2 sudo[281976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:11 compute-2 sudo[281976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:11 compute-2 sudo[281976]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:11 compute-2 sudo[282001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:11 compute-2 sudo[282001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:11 compute-2 sudo[282001]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:11.217+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:11 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:11 compute-2 ceph-mon[77081]: pgmap v3661: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:11 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:11 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:11.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:12.177+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:12 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:13.135+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:13 compute-2 ceph-mon[77081]: pgmap v3662: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:13 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:14.160+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:14.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:14 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:14 compute-2 ceph-mon[77081]: pgmap v3663: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:14 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:15 compute-2 sshd-session[282027]: Connection closed by 103.100.209.86 port 54228
Jan 22 15:29:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:15.195+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:15 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:16.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:16.220+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:16 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:17 compute-2 ceph-mon[77081]: pgmap v3664: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:17 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:17 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:17.238+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:18 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:18.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:18.242+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:19.247+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:19 compute-2 ceph-mon[77081]: pgmap v3665: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:19 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3691489123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:29:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3691489123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:29:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:19.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:20.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:20.246+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:20 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:21.278+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:21 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:22 compute-2 ceph-mon[77081]: pgmap v3666: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:22 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:22 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:22.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:22.293+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:22 compute-2 ceph-mon[77081]: pgmap v3667: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:22 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:23.248+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:23 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:23 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:23.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:29:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:29:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:24.264+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:24 compute-2 ceph-mon[77081]: pgmap v3668: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:24 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:25 compute-2 podman[282036]: 2026-01-22 15:29:25.037341918 +0000 UTC m=+0.094226494 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 15:29:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:25.258+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:25 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:26.227+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:26 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:26 compute-2 ceph-mon[77081]: pgmap v3669: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:26 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:27.246+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:28.252+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:28 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:29.234+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:29 compute-2 ceph-mon[77081]: pgmap v3670: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:29 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:29 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:29.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:30.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:30.238+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:30 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:30 compute-2 ceph-mon[77081]: pgmap v3671: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:30 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:31 compute-2 sudo[282066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:31 compute-2 sudo[282066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:31 compute-2 sudo[282066]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:31 compute-2 sudo[282091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:31 compute-2 sudo[282091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:31 compute-2 sudo[282091]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:31.248+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:31 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:31 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:31.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:32.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:32.271+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:33 compute-2 ceph-mon[77081]: pgmap v3672: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:33 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:33.319+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:33.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:34 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:34 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:34.364+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:35.405+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:35 compute-2 sshd-session[282029]: Connection closed by authenticating user root 103.100.209.86 port 54342 [preauth]
Jan 22 15:29:35 compute-2 ceph-mon[77081]: pgmap v3673: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:35 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:36.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:36.420+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:36 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:36 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:36 compute-2 ceph-mon[77081]: pgmap v3674: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:36 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:37.433+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:38.416+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:38 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:39.418+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:39.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:40 compute-2 podman[282120]: 2026-01-22 15:29:40.013257141 +0000 UTC m=+0.068643127 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 15:29:40 compute-2 ceph-mon[77081]: pgmap v3675: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:40 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:40.427+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:41.382+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:41 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:41 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:41 compute-2 ceph-mon[77081]: pgmap v3676: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:41 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:41.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:42.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:42.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:42 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:43.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:43 compute-2 ceph-mon[77081]: pgmap v3677: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:43 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:43 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6772 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:43 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:43.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:44.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:44 compute-2 ceph-mon[77081]: pgmap v3678: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:44 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:45.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:45 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:46.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:46.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:29:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:29:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:29:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:29:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:29:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:29:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:47.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:47 compute-2 ceph-mon[77081]: pgmap v3679: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:47 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:48.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:48.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:48 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:48 compute-2 ceph-mon[77081]: pgmap v3680: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:48 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:48 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:49.450+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:49.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:50.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:50 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:50.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:51 compute-2 ceph-mon[77081]: pgmap v3681: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:51 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:51 compute-2 sudo[282145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:51 compute-2 sudo[282145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:51 compute-2 sudo[282145]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:51 compute-2 sudo[282170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:29:51 compute-2 sudo[282170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:29:51 compute-2 sudo[282170]: pam_unix(sudo:session): session closed for user root
Jan 22 15:29:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:51.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:51.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:52.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:52.500+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:52 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:53.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:53 compute-2 ceph-mon[77081]: pgmap v3682: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:53 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:53 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6782 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:53.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:54.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:54 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:54 compute-2 ceph-mon[77081]: pgmap v3683: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:54.594+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:55.548+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:55 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:55.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:56 compute-2 podman[282197]: 2026-01-22 15:29:56.102751675 +0000 UTC m=+0.159681006 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 22 15:29:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:29:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:56.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:29:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:56.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:56 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:56 compute-2 ceph-mon[77081]: pgmap v3684: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:56 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:57.478+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:29:57 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:57.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:58.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:29:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:58.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:58 compute-2 ceph-mon[77081]: pgmap v3685: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:29:58 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:29:58 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:59.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:29:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:59 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:29:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:29:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:29:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:59.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:00.462+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:00 compute-2 ceph-mon[77081]: pgmap v3686: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 15:30:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 15:30:00 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:01.475+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:01.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:02 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:02.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:02 compute-2 sudo[282226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:02 compute-2 sudo[282226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:02 compute-2 sudo[282226]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:02 compute-2 sudo[282251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:30:02 compute-2 sudo[282251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:02 compute-2 sudo[282251]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:02 compute-2 sudo[282276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:02 compute-2 sudo[282276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:02 compute-2 sudo[282276]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:02 compute-2 sudo[282301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:30:02 compute-2 sudo[282301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:03 compute-2 ceph-mon[77081]: pgmap v3687: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:03 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:03 compute-2 sudo[282301]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:03.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:04 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6792 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:04 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:04.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:04.529+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:05 compute-2 ceph-mon[77081]: pgmap v3688: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:30:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:30:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:30:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:30:05 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:30:05 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:05.527+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:05.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:06 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:06 compute-2 ceph-mon[77081]: pgmap v3689: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:06.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:07.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:07 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:07.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:08.609+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:09 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:09 compute-2 ceph-mon[77081]: pgmap v3690: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:09 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:09.634+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:10 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:10.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:10 compute-2 sudo[282360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:10 compute-2 sudo[282360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:10 compute-2 sudo[282360]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:10 compute-2 sudo[282386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:30:10 compute-2 sudo[282386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:10 compute-2 sudo[282386]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:10 compute-2 podman[282384]: 2026-01-22 15:30:10.661294897 +0000 UTC m=+0.078726693 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 15:30:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:10.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:11 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:11 compute-2 ceph-mon[77081]: pgmap v3691: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:30:11 compute-2 sudo[282431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:11 compute-2 sudo[282431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:11 compute-2 sudo[282431]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:11 compute-2 sudo[282456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:11 compute-2 sudo[282456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:11 compute-2 sudo[282456]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:11.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:11.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:12.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:12 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:12.608+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:13 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:13 compute-2 ceph-mon[77081]: pgmap v3692: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:13.649+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:13.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:14.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:14 compute-2 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:30:14 compute-2 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:14.631+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:15 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:15 compute-2 ceph-mon[77081]: pgmap v3693: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:15.679+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:15.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:16.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:16 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:16.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:17 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:17 compute-2 ceph-mon[77081]: pgmap v3694: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:17.711+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:18.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:18.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:18 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:18 compute-2 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:18.680+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:19.685+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:19 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:19 compute-2 ceph-mon[77081]: pgmap v3695: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2581805369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:30:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2581805369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:30:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:20.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:20.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:20.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:20 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:20 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:20 compute-2 ceph-mon[77081]: pgmap v3696: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:21.642+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:21 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:22.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:22.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:22.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:23 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:23 compute-2 ceph-mon[77081]: pgmap v3697: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:23.596+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:24.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:24 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:24 compute-2 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:24.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:24.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:25 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:25 compute-2 ceph-mon[77081]: pgmap v3698: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:25.508+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:26.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:26 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:26 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:26.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:27 compute-2 podman[282488]: 2026-01-22 15:30:27.019406027 +0000 UTC m=+0.082301079 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 15:30:27 compute-2 ceph-mon[77081]: pgmap v3699: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:27 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:27.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:28.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:28.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:28 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:28 compute-2 ceph-mon[77081]: pgmap v3700: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:28 compute-2 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:29.483+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:29 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:30.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:30.435+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:30 compute-2 ceph-mon[77081]: pgmap v3701: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:30 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:31.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:31 compute-2 sudo[282518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:31 compute-2 sudo[282518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:31 compute-2 sudo[282518]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:31 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:31 compute-2 sudo[282543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:31 compute-2 sudo[282543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:31 compute-2 sudo[282543]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:32.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:32.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:32.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:32 compute-2 ceph-mon[77081]: pgmap v3702: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:32 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:33.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:34 compute-2 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6822 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:34 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:34.411+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:35 compute-2 ceph-mon[77081]: pgmap v3703: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:35 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:35.391+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:36.342+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:37.376+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:37 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:37 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 65 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 65 slow requests (by type [ 'delayed' : 65 ] most affected pool [ 'vms' : 41 ])
Jan 22 15:30:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:38.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 65 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:38 compute-2 ceph-mon[77081]: pgmap v3704: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:38 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:38 compute-2 ceph-mon[77081]: pgmap v3705: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:38 compute-2 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6827 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:39.350+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:39 compute-2 ceph-mon[77081]: 65 slow requests (by type [ 'delayed' : 65 ] most affected pool [ 'vms' : 41 ])
Jan 22 15:30:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:40.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:40.317+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:40 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:40 compute-2 ceph-mon[77081]: pgmap v3706: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:40 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:41 compute-2 podman[282572]: 2026-01-22 15:30:41.018375306 +0000 UTC m=+0.073550077 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:30:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:41.335+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:41 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:42.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:42.333+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:43 compute-2 ceph-mon[77081]: pgmap v3707: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:43 compute-2 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 15:30:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:43.381+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:44 compute-2 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6832 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:44 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:44.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:45 compute-2 ceph-mon[77081]: pgmap v3708: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:45 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:45.359+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:46.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:46.396+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:46 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:30:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:30:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:30:47.270 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:30:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:30:47.270 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:30:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:47.418+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:47 compute-2 ceph-mon[77081]: pgmap v3709: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:47 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:48.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:48.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:48.433+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:48 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:48 compute-2 ceph-mon[77081]: pgmap v3710: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:48 compute-2 ceph-mon[77081]: Health check update: 173 slow ops, oldest one blocked for 6837 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:48 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:49.403+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:49 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:30:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:50.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:30:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:50.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:50.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:50 compute-2 ceph-mon[77081]: pgmap v3711: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:50 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:51.467+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:51 compute-2 sudo[282598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:51 compute-2 sudo[282598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:51 compute-2 sudo[282598]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:51 compute-2 sudo[282623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:30:51 compute-2 sudo[282623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:30:51 compute-2 sudo[282623]: pam_unix(sudo:session): session closed for user root
Jan 22 15:30:51 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:52.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:52.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:52.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:53 compute-2 ceph-mon[77081]: pgmap v3712: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:53 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:53.419+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:54.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:54 compute-2 ceph-mon[77081]: Health check update: 173 slow ops, oldest one blocked for 6842 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:54 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:54.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:54.451+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:55 compute-2 ceph-mon[77081]: pgmap v3713: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:55 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:55.427+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:30:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:56.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:30:56 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:56.395+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:57.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:57 compute-2 ceph-mon[77081]: pgmap v3714: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:30:57 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:30:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 15:30:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:58.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 15:30:58 compute-2 podman[282651]: 2026-01-22 15:30:58.050185779 +0000 UTC m=+0.103176131 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:30:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:30:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:30:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:58.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:30:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:58.395+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:58 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:30:58 compute-2 ceph-mon[77081]: Health check update: 173 slow ops, oldest one blocked for 6847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:30:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:59.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:30:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:00.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:00 compute-2 ceph-mon[77081]: pgmap v3715: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:00 compute-2 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 15:31:00 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:00.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:00.385+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:01 compute-2 ceph-mon[77081]: pgmap v3716: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:01 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:01.365+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:02.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:02 compute-2 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 15:31:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:02.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:02.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:03 compute-2 ceph-mon[77081]: pgmap v3717: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:03 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:03.428+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:04.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:04 compute-2 ceph-mon[77081]: Health check update: 140 slow ops, oldest one blocked for 6852 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:04 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:04.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:04.444+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:05.436+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:05 compute-2 ceph-mon[77081]: pgmap v3718: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:05 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:06.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:06.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:06.440+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:06 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:06 compute-2 ceph-mon[77081]: pgmap v3719: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:07.398+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:07 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:08.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:08 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:08 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:08.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:08.445+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:09.477+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:09 compute-2 ceph-mon[77081]: pgmap v3720: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:09 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:09 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:10.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:10.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:10.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:10 compute-2 sudo[282683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:10 compute-2 sudo[282683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:10 compute-2 sudo[282683]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:10 compute-2 sudo[282708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:31:10 compute-2 sudo[282708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:10 compute-2 sudo[282708]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:10 compute-2 sudo[282733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:10 compute-2 sudo[282733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:10 compute-2 sudo[282733]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:10 compute-2 sudo[282758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:31:10 compute-2 sudo[282758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:11 compute-2 ceph-mon[77081]: pgmap v3721: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:11 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:11 compute-2 podman[282798]: 2026-01-22 15:31:11.280685347 +0000 UTC m=+0.050527537 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:31:11 compute-2 sudo[282758]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:11.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:11 compute-2 sudo[282823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:11 compute-2 sudo[282823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-2 sudo[282823]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:11 compute-2 sudo[282848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:31:11 compute-2 sudo[282848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-2 sudo[282848]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:11 compute-2 sudo[282873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:11 compute-2 sudo[282873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-2 sudo[282873]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:11 compute-2 sudo[282898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:31:11 compute-2 sudo[282898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-2 sudo[282935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:11 compute-2 sudo[282935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-2 sudo[282935]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:11 compute-2 sudo[282974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:11 compute-2 sudo[282974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:11 compute-2 sudo[282974]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:12 compute-2 podman[283046]: 2026-01-22 15:31:12.259362158 +0000 UTC m=+0.078510668 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 15:31:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:31:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:31:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:12 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:12.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:12 compute-2 podman[283046]: 2026-01-22 15:31:12.392717196 +0000 UTC m=+0.211865686 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 15:31:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:12.406+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:12 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:13 compute-2 podman[283201]: 2026-01-22 15:31:13.201842882 +0000 UTC m=+0.071606366 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:31:13 compute-2 podman[283201]: 2026-01-22 15:31:13.217017583 +0000 UTC m=+0.086781007 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:31:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:13.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:13 compute-2 ceph-mon[77081]: pgmap v3722: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:13 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:13 compute-2 podman[283264]: 2026-01-22 15:31:13.46070026 +0000 UTC m=+0.057565794 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Jan 22 15:31:13 compute-2 podman[283264]: 2026-01-22 15:31:13.471159737 +0000 UTC m=+0.068025281 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.component=keepalived-container, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 15:31:13 compute-2 sudo[282898]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:13 compute-2 sudo[283298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:13 compute-2 sudo[283298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:13 compute-2 sudo[283298]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:13 compute-2 sudo[283323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:31:13 compute-2 sudo[283323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:13 compute-2 sudo[283323]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:13 compute-2 sudo[283348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:13 compute-2 sudo[283348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:13 compute-2 sudo[283348]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:13 compute-2 sudo[283373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:31:13 compute-2 sudo[283373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:14.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:14 compute-2 sudo[283373]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:14.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:14.379+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:14 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:14 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:31:14 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:31:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:15.407+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:15 compute-2 ceph-mon[77081]: pgmap v3723: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:15 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:16.364+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:16 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:16 compute-2 ceph-mon[77081]: pgmap v3724: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:17.410+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:17 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:17 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:18.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:18.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:18.368+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:18 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:18 compute-2 ceph-mon[77081]: pgmap v3725: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:18 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:19.339+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:19 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:31:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3953269957' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:31:19 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 15:31:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:20.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 15:31:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:20.381+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:20 compute-2 ceph-mon[77081]: pgmap v3726: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:20 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:21.380+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:21 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:31:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:22.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:22 compute-2 sudo[283433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:22 compute-2 sudo[283433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:22 compute-2 sudo[283433]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:22 compute-2 sudo[283458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:31:22 compute-2 sudo[283458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:22 compute-2 sudo[283458]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:22.343+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:22.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:22 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:23 compute-2 ceph-mon[77081]: pgmap v3727: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:23 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:23.329+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:24.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:24 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:24 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:24.349+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:24.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:25 compute-2 ceph-mon[77081]: pgmap v3728: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:25 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:25.351+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:26.319+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:26 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:26.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:27 compute-2 ceph-mon[77081]: pgmap v3729: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:27 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:27.358+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:27 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:28 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:28.373+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:28.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:29 compute-2 podman[283486]: 2026-01-22 15:31:29.117730441 +0000 UTC m=+0.157516638 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:31:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:29.358+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:29 compute-2 ceph-mon[77081]: pgmap v3730: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:29 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:29 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:30.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:30.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:30 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:31.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:31:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:31 compute-2 ceph-mon[77081]: pgmap v3731: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:31 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:32 compute-2 sudo[283514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:32 compute-2 sudo[283514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:32 compute-2 sudo[283514]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:32 compute-2 sudo[283539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:32 compute-2 sudo[283539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:32 compute-2 sudo[283539]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:32.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:32.369+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:32 compute-2 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 15:31:32 compute-2 ceph-mon[77081]: pgmap v3732: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:32 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:33.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:33 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:33 compute-2 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6882 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:34.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:34.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:34 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:34 compute-2 ceph-mon[77081]: pgmap v3733: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:35.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:35 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:36.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:36.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:36 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:36 compute-2 ceph-mon[77081]: pgmap v3734: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:37.377+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:38 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:38.331+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #232. Immutable memtables: 0.
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.418232) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 149] Flushing memtable with next log file: 232
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898418387, "job": 149, "event": "flush_started", "num_memtables": 1, "num_entries": 2759, "num_deletes": 543, "total_data_size": 5144947, "memory_usage": 5238032, "flush_reason": "Manual Compaction"}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 149] Level-0 flush table #233: started
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898437767, "cf_name": "default", "job": 149, "event": "table_file_creation", "file_number": 233, "file_size": 2094155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112981, "largest_seqno": 115734, "table_properties": {"data_size": 2085806, "index_size": 3950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 31336, "raw_average_key_size": 23, "raw_value_size": 2063839, "raw_average_value_size": 1570, "num_data_blocks": 166, "num_entries": 1314, "num_filter_entries": 1314, "num_deletions": 543, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095727, "oldest_key_time": 1769095727, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 149] Flush lasted 19595 microseconds, and 10412 cpu microseconds.
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.437842) [db/flush_job.cc:967] [default] [JOB 149] Level-0 flush table #233: 2094155 bytes OK
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.437865) [db/memtable_list.cc:519] [default] Level-0 commit table #233 started
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440004) [db/memtable_list.cc:722] [default] Level-0 commit table #233: memtable #1 done
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440018) EVENT_LOG_v1 {"time_micros": 1769095898440014, "job": 149, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440038) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 149] Try to delete WAL files size 5131403, prev total WAL file size 5139670, number of live WAL files 2.
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000229.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.441166) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323538' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 150] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 149 Base level 0, inputs: [233(2045KB)], [231(11MB)]
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898441194, "job": 150, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [233], "files_L6": [231], "score": -1, "input_data_size": 14152863, "oldest_snapshot_seqno": -1}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 150] Generated table #234: 14483 keys, 11396376 bytes, temperature: kUnknown
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898538691, "cf_name": "default", "job": 150, "event": "table_file_creation", "file_number": 234, "file_size": 11396376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11318992, "index_size": 40087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 396990, "raw_average_key_size": 27, "raw_value_size": 11074489, "raw_average_value_size": 764, "num_data_blocks": 1444, "num_entries": 14483, "num_filter_entries": 14483, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 234, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.538985) [db/compaction/compaction_job.cc:1663] [default] [JOB 150] Compacted 1@0 + 1@6 files to L6 => 11396376 bytes
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.540879) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.0 rd, 116.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.4) OK, records in: 15506, records dropped: 1023 output_compression: NoCompression
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.540899) EVENT_LOG_v1 {"time_micros": 1769095898540890, "job": 150, "event": "compaction_finished", "compaction_time_micros": 97577, "compaction_time_cpu_micros": 29150, "output_level": 6, "num_output_files": 1, "total_output_size": 11396376, "num_input_records": 15506, "num_output_records": 14483, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898541468, "job": 150, "event": "table_file_deletion", "file_number": 233}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000231.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898543951, "job": 150, "event": "table_file_deletion", "file_number": 231}
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.441099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:31:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:39.305+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:39 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:39 compute-2 ceph-mon[77081]: pgmap v3735: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:39 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:40.320+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:40.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:40.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:41.301+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:41 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:41 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:41 compute-2 ceph-mon[77081]: pgmap v3736: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:42 compute-2 podman[283570]: 2026-01-22 15:31:42.007008905 +0000 UTC m=+0.064357193 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 15:31:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:42.339+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:42.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:43 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:43 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:43.322+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:43 compute-2 ceph-mon[77081]: pgmap v3737: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:43 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:43 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:44.293+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:44.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:44 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:44 compute-2 ceph-mon[77081]: pgmap v3738: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:44 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:45.270+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:46 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:46.289+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:46.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:47.255+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:31:47.270 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:31:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:31:47.271 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:31:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:31:47.271 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:31:47 compute-2 ceph-mon[77081]: pgmap v3739: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:47 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:48.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:48.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:48 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:48 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:49.255+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:49 compute-2 ceph-mon[77081]: pgmap v3740: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:49 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:49 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:50.247+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:50.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:51.282+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:51 compute-2 ceph-mon[77081]: pgmap v3741: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:51 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:52 compute-2 sudo[283595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:52 compute-2 sudo[283595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:52 compute-2 sudo[283595]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:52 compute-2 sudo[283620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:31:52 compute-2 sudo[283620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:31:52 compute-2 sudo[283620]: pam_unix(sudo:session): session closed for user root
Jan 22 15:31:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:52.324+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:52.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:52.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:52 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:52 compute-2 ceph-mon[77081]: pgmap v3742: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:53.348+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:54 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:54 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:54.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:54.394+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:55 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:55 compute-2 ceph-mon[77081]: pgmap v3743: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:55 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:55.439+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:31:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:56.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:31:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:56.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:56 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:57.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:57 compute-2 ceph-mon[77081]: pgmap v3744: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:57 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:31:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:31:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:58.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:31:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:31:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:31:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:31:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:58.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:58 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:58 compute-2 ceph-mon[77081]: pgmap v3745: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:31:58 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:31:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:59.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:31:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:31:59 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:00 compute-2 podman[283650]: 2026-01-22 15:32:00.043129356 +0000 UTC m=+0.093573506 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:32:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:00.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:00.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:00.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:00 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:00 compute-2 ceph-mon[77081]: pgmap v3746: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:01.464+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:02 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:02.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:02.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:03.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:03 compute-2 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:32:03 compute-2 ceph-mon[77081]: pgmap v3747: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:04.412+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:04.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:04 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:04 compute-2 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:04 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:05.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:05 compute-2 ceph-mon[77081]: pgmap v3748: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:05 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:06.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:06.481+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:07 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:07 compute-2 ceph-mon[77081]: pgmap v3749: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:07.437+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:08 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:08.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:08.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:09 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:09 compute-2 ceph-mon[77081]: pgmap v3750: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:09 compute-2 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:09.344+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:10.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:10.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:10 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:11.382+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:11 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:11 compute-2 ceph-mon[77081]: pgmap v3751: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:11 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:12.334+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:12.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:12 compute-2 sudo[283684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:12 compute-2 sudo[283684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:12 compute-2 sudo[283684]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:12 compute-2 sudo[283710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:12 compute-2 sudo[283710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:12 compute-2 sudo[283710]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:12 compute-2 podman[283708]: 2026-01-22 15:32:12.509744466 +0000 UTC m=+0.094651585 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 15:32:12 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:12 compute-2 ceph-mon[77081]: pgmap v3752: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:13.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:14.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:14.390+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:14.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:14 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:14 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:14 compute-2 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:15.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:16 compute-2 ceph-mon[77081]: pgmap v3753: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:16 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:16.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:16.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:17 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:17 compute-2 ceph-mon[77081]: pgmap v3754: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:17.404+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:18.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:32:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:32:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:32:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:32:18 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:19.489+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:20.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:20 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:20.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:20 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:20 compute-2 ceph-mon[77081]: pgmap v3755: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:20 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:20 compute-2 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:20 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:32:20 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:32:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:21.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:22 compute-2 sudo[283758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:22 compute-2 sudo[283758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:22 compute-2 sudo[283758]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:22 compute-2 sudo[283783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:32:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:22.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:22 compute-2 sudo[283783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:22 compute-2 sudo[283783]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:22.401+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:22 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:22 compute-2 sudo[283808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:22 compute-2 sudo[283808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:22 compute-2 sudo[283808]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:22 compute-2 sudo[283833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:32:22 compute-2 sudo[283833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:22 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:22 compute-2 ceph-mon[77081]: pgmap v3756: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:23 compute-2 sudo[283833]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:23.441+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:23 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:32:23.895 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:32:23 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:32:23.897 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:32:23 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:23 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:23 compute-2 ceph-mon[77081]: pgmap v3757: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:23 compute-2 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:32:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:32:23 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:32:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:24.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:25 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:25 compute-2 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:32:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:32:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:32:25 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:32:25 compute-2 ceph-mon[77081]: pgmap v3758: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:25.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:26.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:26.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:26.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:27.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:27 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:28.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:28.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:29.540+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:30 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:30 compute-2 ceph-mon[77081]: pgmap v3759: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:30 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:30 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:30 compute-2 ceph-mon[77081]: pgmap v3760: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:30 compute-2 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:30.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:30.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:31 compute-2 podman[283893]: 2026-01-22 15:32:31.060208764 +0000 UTC m=+0.112336837 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
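The podman health_status event above embeds the container's Kolla config_data as a Python-style dict literal inside an otherwise comma-separated label dump, so a plain split on commas cannot recover it. A sketch of one way to extract it, assuming the event line is already in memory as a string (extract_config_data is an illustrative helper, not a podman API):

    import ast

    def extract_config_data(event_line: str) -> dict:
        """Pull the config_data={...} dict literal out of a podman event line."""
        start = event_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(event_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # The payload is plain literals (strings, lists, dicts,
                    # booleans), so literal_eval parses it without executing code.
                    return ast.literal_eval(event_line[start:i + 1])
        raise ValueError("unbalanced braces in config_data")

ast.literal_eval is preferred over eval here precisely because the blob contains only literal syntax; the returned dict then exposes 'volumes', 'environment', 'healthcheck', and so on as ordinary Python values.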
Jan 22 15:32:31 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:31 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:31 compute-2 ceph-mon[77081]: pgmap v3761: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:31.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:32 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:32.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:32.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:32 compute-2 sudo[283921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:32 compute-2 sudo[283921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:32 compute-2 sudo[283921]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:32.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:32 compute-2 sudo[283946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:32 compute-2 sudo[283946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:32 compute-2 sudo[283946]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:32 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:32:32.898 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:32:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
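For readability, the _set_new_cache_sizes byte counts above work out to roughly 972.8 MiB of cache with exact-MiB allocations of 328 / 332 / 304 MiB; a one-liner check:

    # 2**20 bytes per MiB; the three *_alloc values above are exact MiB multiples.
    for name, nbytes in [("cache_size", 1020054731), ("inc_alloc", 343932928),
                         ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name:>10}: {nbytes / 2**20:.1f} MiB")
    # -> cache_size: 972.8 MiB, inc_alloc: 328.0, full_alloc: 332.0, kv_alloc: 304.0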
Jan 22 15:32:33 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:33 compute-2 ceph-mon[77081]: pgmap v3762: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:33 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:33.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:34 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:34 compute-2 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:34.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:34.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:35 compute-2 sudo[283973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:35 compute-2 sudo[283973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:35 compute-2 sudo[283973]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:35 compute-2 sudo[283998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:32:35 compute-2 sudo[283998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:35 compute-2 sudo[283998]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:35 compute-2 ceph-mon[77081]: pgmap v3763: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:35 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:32:35 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:32:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:35.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:36 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:36.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:37 compute-2 ceph-mon[77081]: pgmap v3764: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:37 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:37.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:38.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
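The radosgw beast lines throughout this capture are health probes (anonymous HEAD / every ~2 s from 192.168.122.100 and .102). A sketch of a parser for their fixed layout, under the assumption that the three trailing dashes are simply fields not populated for these requests:

    import re

    # Matches lines like:
    #   beast: 0x7f...: 192.168.122.100 - anonymous [22/Jan/2026:15:32:38.468 +0000]
    #   "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
    BEAST = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<nbytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line: str):
        """Return a dict of fields from one beast access-log line, or None."""
        m = BEAST.search(line)
        if not m:
            return None
        d = m.groupdict()
        d["status"], d["nbytes"] = int(d["status"]), int(d["nbytes"])
        d["latency"] = float(d["latency"])
        return d

    sample = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
              '[22/Jan/2026:15:32:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.001000027s')
    print(parse_beast(sample))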
Jan 22 15:32:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:38.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:38 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:38 compute-2 ceph-mon[77081]: pgmap v3765: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:39.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:39 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:39 compute-2 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:32:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:32:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:40.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:41 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:41 compute-2 ceph-mon[77081]: pgmap v3766: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:41.549+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:42 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:42 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:42 compute-2 ceph-mon[77081]: pgmap v3767: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:42.562+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:43 compute-2 podman[284026]: 2026-01-22 15:32:43.027222059 +0000 UTC m=+0.087119217 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 15:32:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:43.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:43 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:44.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:44.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:44 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:44 compute-2 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:44 compute-2 ceph-mon[77081]: pgmap v3768: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:45.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 15:32:45 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:46.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:46.621+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:32:47.271 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:32:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:32:47.272 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:32:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:32:47.272 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:32:47 compute-2 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 15:32:47 compute-2 ceph-mon[77081]: pgmap v3769: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:47.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:48.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:48.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:48.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:49 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:49 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:49 compute-2 ceph-mon[77081]: pgmap v3770: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:49.719+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:50.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:50.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:50 compute-2 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:50 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:50.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:51.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:52 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:52 compute-2 ceph-mon[77081]: pgmap v3771: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:52.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:52.637+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:52 compute-2 sudo[284050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:52 compute-2 sudo[284050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:52 compute-2 sudo[284050]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:52 compute-2 sudo[284075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:32:52 compute-2 sudo[284075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:32:52 compute-2 sudo[284075]: pam_unix(sudo:session): session closed for user root
Jan 22 15:32:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:53.645+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:53 compute-2 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:32:53 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:53 compute-2 ceph-mon[77081]: pgmap v3772: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:54.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:54.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:54.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:54 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:54 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:54 compute-2 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:54 compute-2 ceph-mon[77081]: pgmap v3773: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
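The pgmap summaries repeat the same state mix throughout this window (2 of 305 PGs active+clean+laggy, consistent with the single blocked OSD). A sketch that splits one such line into a state histogram, using a line copied from above:

    import re

    line = ("pgmap v3773: 305 pgs: 2 active+clean+laggy, 303 active+clean; "
            "882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: (.+?); (.+)", line)
    version, total, states_raw, usage = m.groups()

    # "2 active+clean+laggy, 303 active+clean" -> {"active+clean+laggy": 2, ...}
    states = {}
    for part in states_raw.split(", "):
        count, state = part.split(" ", 1)
        states[state] = int(count)

    assert sum(states.values()) == int(total)   # 2 + 303 == 305
    print(f"pgmap v{version}: {states} | {usage}")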
Jan 22 15:32:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:55.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:56 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:56.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:56.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:56.722+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:57 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:57 compute-2 ceph-mon[77081]: pgmap v3774: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:57.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:58 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:32:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:32:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:58.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:32:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:32:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:32:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:58.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:32:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:58.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:59 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:32:59 compute-2 ceph-mon[77081]: pgmap v3775: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:32:59 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:32:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:59.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:32:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:00.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:00 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:00.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:00.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:01 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:01 compute-2 ceph-mon[77081]: pgmap v3776: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:01 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:01.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:02 compute-2 podman[284105]: 2026-01-22 15:33:02.025345948 +0000 UTC m=+0.087230159 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 15:33:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:02.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:02.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:03 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:03 compute-2 ceph-mon[77081]: pgmap v3777: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:03.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:04.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:04.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:04 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:04 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:04 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:04 compute-2 ceph-mon[77081]: pgmap v3778: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:04.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:05.742+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:05 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:06.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:06.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:06.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:07.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:08 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:08 compute-2 ceph-mon[77081]: pgmap v3779: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:08.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:08 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:08.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:08.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:09.827+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:10 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:10 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:10 compute-2 ceph-mon[77081]: pgmap v3780: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:10 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:10.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:10.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:10 compute-2 sshd-session[284135]: Connection closed by authenticating user root 134.209.61.246 port 51068 [preauth]
Jan 22 15:33:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:10.852+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:11 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:11 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:11 compute-2 ceph-mon[77081]: pgmap v3781: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:11 compute-2 sshd-session[284138]: Connection closed by authenticating user root 134.209.61.246 port 51070 [preauth]
Jan 22 15:33:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:11.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:12 compute-2 sshd-session[284140]: Connection closed by authenticating user root 134.209.61.246 port 51076 [preauth]
Jan 22 15:33:12 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:12.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:12 compute-2 sshd-session[284142]: Connection closed by authenticating user root 134.209.61.246 port 51078 [preauth]
Jan 22 15:33:12 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51088 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:12 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51100 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:12 compute-2 sudo[284144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:12 compute-2 sudo[284144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:12 compute-2 sudo[284144]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:12.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:12 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51106 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:12 compute-2 sudo[284169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:12 compute-2 sudo[284169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:12 compute-2 sudo[284169]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51112 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51116 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51124 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51126 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:13 compute-2 ceph-mon[77081]: pgmap v3782: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:13 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:13 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51128 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51132 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51138 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:13.906+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:13 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51146 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:13 compute-2 podman[284195]: 2026-01-22 15:33:13.992463745 +0000 UTC m=+0.051164271 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 22 15:33:14 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51148 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:14 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51162 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:14 compute-2 sshd[169467]: drop connection #0 from [134.209.61.246]:51178 on [38.102.83.5]:22 penalty: failed authentication
Jan 22 15:33:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:14.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:14 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:14 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:14 compute-2 ceph-mon[77081]: pgmap v3783: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:14.918+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:15 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:15.963+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:16.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:16 compute-2 ceph-mon[77081]: pgmap v3784: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:16 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:16.924+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:17 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:17.931+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:18.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:18.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:33:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:33:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:33:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:33:18 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:18 compute-2 ceph-mon[77081]: pgmap v3785: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:33:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:33:18 compute-2 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:18 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:18.949+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:19 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:19.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:20 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:20.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:20.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:21.006+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:21 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:21 compute-2 ceph-mon[77081]: pgmap v3786: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:21 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:21.988+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:22.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:22.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:23 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:23.002+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:23 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:24 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:24.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:24.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:24.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:25 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:25.011+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:25 compute-2 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:33:25 compute-2 ceph-mon[77081]: pgmap v3787: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:25 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:26 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:26.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:26.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:26.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:26 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:26 compute-2 ceph-mon[77081]: pgmap v3788: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:26 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:27 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:27.028+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:28 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:28.068+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:28 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:28 compute-2 ceph-mon[77081]: pgmap v3789: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:28 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:28 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 6998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:28 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:28.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:28 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:28.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:29 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:29.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:29 compute-2 ceph-mon[77081]: pgmap v3790: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:29 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:30 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:30.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:30.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:30.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:30 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:30 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:31 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:31.092+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:31 compute-2 ceph-mon[77081]: pgmap v3791: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:31 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:32 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:32.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:32.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:32.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:32 compute-2 ceph-mon[77081]: pgmap v3792: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:32 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:33 compute-2 podman[284223]: 2026-01-22 15:33:33.017642965 +0000 UTC m=+0.080342087 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 22 15:33:33 compute-2 sudo[284237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:33 compute-2 sudo[284237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:33 compute-2 sudo[284237]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:33 compute-2 sudo[284275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:33 compute-2 sudo[284275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:33 compute-2 sudo[284275]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:33 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:33.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:34 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:34 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:34 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:34.094+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:35 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:35.069+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:35 compute-2 ceph-mon[77081]: pgmap v3793: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:35 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:35 compute-2 sudo[284301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:35 compute-2 sudo[284301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-2 sudo[284301]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:35 compute-2 sudo[284326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:33:35 compute-2 sudo[284326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-2 sudo[284326]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:35 compute-2 sudo[284351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:35 compute-2 sudo[284351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-2 sudo[284351]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:35 compute-2 sudo[284376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:33:35 compute-2 sudo[284376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:35 compute-2 sudo[284376]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:36 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:36.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:36 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:33:36 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:33:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:37 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:37.048+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:37 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:37 compute-2 ceph-mon[77081]: pgmap v3794: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:33:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:33:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:33:37 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:33:38 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:38.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:38 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:38 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:38 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:38 compute-2 ceph-mon[77081]: pgmap v3795: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:38.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:39.077+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:39 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:39 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:39 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:40 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:40.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:40.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:40 compute-2 ceph-mon[77081]: pgmap v3796: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:40 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:41 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:41.134+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:41 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:42 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:42.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:42.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:42 compute-2 sudo[284435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:42 compute-2 sudo[284435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:42 compute-2 sudo[284435]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:42 compute-2 sudo[284460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:33:42 compute-2 sudo[284460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:42 compute-2 sudo[284460]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:43 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:43.082+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:43 compute-2 ceph-mon[77081]: pgmap v3797: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:43 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:33:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:33:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:44.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:44.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:44 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:44 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:44 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:45 compute-2 podman[284486]: 2026-01-22 15:33:45.019751871 +0000 UTC m=+0.073915165 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 15:33:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:45.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:45 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:45 compute-2 ceph-mon[77081]: pgmap v3798: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:45 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:46.135+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:46 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:46.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:46 compute-2 ceph-mon[77081]: pgmap v3799: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:46 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:47.174+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:47 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:33:47.273 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:33:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:33:47.273 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:33:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:33:47.273 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:33:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:48.127+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:48 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:48 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:48.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:48.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:49.153+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:49 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:49 compute-2 ceph-mon[77081]: pgmap v3800: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:49 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:49 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:50.126+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:50 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:50 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:50.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:51.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:51 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:51 compute-2 ceph-mon[77081]: pgmap v3801: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:51 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:51 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:52.066+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:52 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:52.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:52 compute-2 ceph-mon[77081]: pgmap v3802: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:52 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:53.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:53 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:53 compute-2 sudo[284512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:53 compute-2 sudo[284512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:53 compute-2 sudo[284512]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:53 compute-2 sudo[284537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:33:53 compute-2 sudo[284537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:33:53 compute-2 sudo[284537]: pam_unix(sudo:session): session closed for user root
Jan 22 15:33:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #235. Immutable memtables: 0.
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.067622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 151] Flushing memtable with next log file: 235
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034067688, "job": 151, "event": "flush_started", "num_memtables": 1, "num_entries": 2321, "num_deletes": 736, "total_data_size": 3742318, "memory_usage": 3824416, "flush_reason": "Manual Compaction"}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 151] Level-0 flush table #236: started
Jan 22 15:33:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:54.080+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:54 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034121579, "cf_name": "default", "job": 151, "event": "table_file_creation", "file_number": 236, "file_size": 2442903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 115739, "largest_seqno": 118055, "table_properties": {"data_size": 2434191, "index_size": 4181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 29850, "raw_average_key_size": 21, "raw_value_size": 2411596, "raw_average_value_size": 1777, "num_data_blocks": 178, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 736, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095898, "oldest_key_time": 1769095898, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 151] Flush lasted 54015 microseconds, and 6049 cpu microseconds.
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.121643) [db/flush_job.cc:967] [default] [JOB 151] Level-0 flush table #236: 2442903 bytes OK
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.121667) [db/memtable_list.cc:519] [default] Level-0 commit table #236 started
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.138776) [db/memtable_list.cc:722] [default] Level-0 commit table #236: memtable #1 done
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.138815) EVENT_LOG_v1 {"time_micros": 1769096034138806, "job": 151, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.138840) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 151] Try to delete WAL files size 3729811, prev total WAL file size 3729811, number of live WAL files 2.
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000232.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.139940) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035353338' seq:72057594037927935, type:22 .. '6C6F676D0035373931' seq:0, type:0; will stop at (end)
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 152] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 151 Base level 0, inputs: [236(2385KB)], [234(10MB)]
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034140012, "job": 152, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [236], "files_L6": [234], "score": -1, "input_data_size": 13839279, "oldest_snapshot_seqno": -1}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 152] Generated table #237: 14351 keys, 11969715 bytes, temperature: kUnknown
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034362729, "cf_name": "default", "job": 152, "event": "table_file_creation", "file_number": 237, "file_size": 11969715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11891833, "index_size": 40905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35909, "raw_key_size": 395183, "raw_average_key_size": 27, "raw_value_size": 11648381, "raw_average_value_size": 811, "num_data_blocks": 1472, "num_entries": 14351, "num_filter_entries": 14351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 237, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.363268) [db/compaction/compaction_job.cc:1663] [default] [JOB 152] Compacted 1@0 + 1@6 files to L6 => 11969715 bytes
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.368404) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.1 rd, 53.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 15840, records dropped: 1489 output_compression: NoCompression
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.368461) EVENT_LOG_v1 {"time_micros": 1769096034368440, "job": 152, "event": "compaction_finished", "compaction_time_micros": 222999, "compaction_time_cpu_micros": 35903, "output_level": 6, "num_output_files": 1, "total_output_size": 11969715, "num_input_records": 15840, "num_output_records": 14351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000236.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034370098, "job": 152, "event": "table_file_deletion", "file_number": 236}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000234.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034372674, "job": 152, "event": "table_file_deletion", "file_number": 234}
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.139782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:33:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:54.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:54 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:54 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:55.091+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:55 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:55 compute-2 ceph-mon[77081]: pgmap v3803: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:55 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:55 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:56.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:56 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:56.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:33:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:56.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:33:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:57.101+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:57 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:57 compute-2 ceph-mon[77081]: pgmap v3804: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:57 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:58.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:58 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:33:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:33:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:33:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:33:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:58.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:33:58 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:33:58 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:59.083+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:59 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:33:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:59 compute-2 ceph-mon[77081]: pgmap v3805: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:33:59 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:33:59 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:33:59 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:00.047+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:00 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:00.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:01.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:01 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:01 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:01 compute-2 ceph-mon[77081]: pgmap v3806: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:02.060+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:02 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:02 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:02 compute-2 ceph-mon[77081]: pgmap v3807: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:02 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:03.093+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:03 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:03 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:03 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:04 compute-2 podman[284567]: 2026-01-22 15:34:04.021484176 +0000 UTC m=+0.077767748 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 15:34:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:04.057+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:04 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:04.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:04.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:05.084+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:05 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:05 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:05 compute-2 ceph-mon[77081]: pgmap v3808: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:05 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:06.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:06 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:06.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:07.027+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:07 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:07 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:07 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:07 compute-2 ceph-mon[77081]: pgmap v3809: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:08.067+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:08 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:08.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:08 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:34:08.838 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:34:08 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:34:08.840 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:34:08 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:08 compute-2 ceph-mon[77081]: pgmap v3810: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:08 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:09.050+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:09 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:10.008+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:10 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:10 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:10 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:10.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:10.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:11.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:11 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:34:11.842 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:34:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:11.958+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:11 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:12 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:12 compute-2 ceph-mon[77081]: pgmap v3811: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:12.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:12.913+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:12 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:13 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:13 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:13 compute-2 ceph-mon[77081]: pgmap v3812: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:13 compute-2 sudo[284598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:13 compute-2 sudo[284598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:13 compute-2 sudo[284598]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:13 compute-2 sudo[284623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:13 compute-2 sudo[284623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:13 compute-2 sudo[284623]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:13.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:13 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:14 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:14 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:14.863+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:14 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:15.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:15 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:15 compute-2 ceph-mon[77081]: 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:15 compute-2 ceph-mon[77081]: pgmap v3813: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 22 15:34:15 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:15 compute-2 podman[284649]: 2026-01-22 15:34:15.990556446 +0000 UTC m=+0.054769926 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:34:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:16.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:16.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:16 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:17 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:17 compute-2 ceph-mon[77081]: pgmap v3814: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 15:34:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:17.969+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:17 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:18 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 e180: 3 total, 3 up, 3 in
Jan 22 15:34:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:18.986+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:18 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 15:34:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:19 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:19 compute-2 ceph-mon[77081]: pgmap v3815: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Jan 22 15:34:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:34:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/135746434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:34:19 compute-2 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:19 compute-2 ceph-mon[77081]: osdmap e180: 3 total, 3 up, 3 in
Jan 22 15:34:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:19.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:20 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 15:34:20 compute-2 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 15:34:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:20.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:20.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:20.958+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:20 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:21 compute-2 ceph-mon[77081]: 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 15:34:21 compute-2 ceph-mon[77081]: pgmap v3817: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 8 op/s
Jan 22 15:34:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:21.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:21 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:22 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:22.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:23.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:23 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:23 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:23 compute-2 ceph-mon[77081]: pgmap v3818: 305 pgs: 2 active+clean+laggy, 303 active+clean; 913 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 30 op/s
Jan 22 15:34:23 compute-2 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:34:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:24.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:24 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:24 compute-2 ceph-mon[77081]: Health check update: 55 slow ops, oldest one blocked for 7053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:24 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:25.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:25 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:25 compute-2 ceph-mon[77081]: pgmap v3819: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 45 op/s
Jan 22 15:34:25 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:26.050+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:26 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:26 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:26.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:26.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:27.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:27 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:27 compute-2 ceph-mon[77081]: pgmap v3820: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 22 15:34:27 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:28.008+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:28 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:28.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:29.056+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:29 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:29 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:29 compute-2 ceph-mon[77081]: pgmap v3821: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 22 15:34:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:29 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:29 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:30.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:30 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 15:34:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:30.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:31.012+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:31 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:32.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:32 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:32.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:32 compute-2 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 15:34:32 compute-2 ceph-mon[77081]: pgmap v3822: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 22 15:34:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:33.046+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:33 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:33 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:33 compute-2 ceph-mon[77081]: pgmap v3823: 305 pgs: 2 active+clean+laggy, 303 active+clean; 928 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 22 15:34:33 compute-2 sudo[284678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:33 compute-2 sudo[284678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:33 compute-2 sudo[284678]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:33 compute-2 sudo[284703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:33 compute-2 sudo[284703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:33 compute-2 sudo[284703]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:34.081+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:34 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:34 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:34 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:34:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1083032130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:34:34 compute-2 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 7063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:35 compute-2 podman[284728]: 2026-01-22 15:34:35.078240817 +0000 UTC m=+0.130461648 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:34:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:35.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:35 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:35 compute-2 ceph-mon[77081]: pgmap v3824: 305 pgs: 2 active+clean+laggy, 303 active+clean; 913 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 820 KiB/s wr, 27 op/s
Jan 22 15:34:35 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:35 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:36.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:36 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:36.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:36.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:37.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:37 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:37 compute-2 ceph-mon[77081]: pgmap v3825: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 682 B/s wr, 18 op/s
Jan 22 15:34:37 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:38.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:38 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:38.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:38 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:38 compute-2 ceph-mon[77081]: pgmap v3826: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:34:38 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:39.127+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:39 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:40 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:40 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:40.152+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:40 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:34:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:34:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:41.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:41 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:41 compute-2 ceph-mon[77081]: pgmap v3827: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:34:41 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:42.112+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:42 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:42.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:42 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:43 compute-2 sudo[284759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:43 compute-2 sudo[284759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-2 sudo[284759]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-2 sudo[284784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:34:43 compute-2 sudo[284784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-2 sudo[284784]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:43.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:43 compute-2 sudo[284809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:43 compute-2 sudo[284809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-2 sudo[284809]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:43 compute-2 sudo[284834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:34:43 compute-2 sudo[284834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:43 compute-2 ceph-mon[77081]: pgmap v3828: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 22 15:34:43 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:43 compute-2 sudo[284834]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:44.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:44.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:44.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:44 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:34:44 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:44 compute-2 ceph-mon[77081]: pgmap v3829: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 597 B/s wr, 8 op/s
Jan 22 15:34:44 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:45.138+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:34:45 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:34:45 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:46 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:46.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:46.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:46 compute-2 ceph-mon[77081]: pgmap v3830: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Jan 22 15:34:46 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:46 compute-2 podman[284892]: 2026-01-22 15:34:46.99704377 +0000 UTC m=+0.052875056 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 15:34:47 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:47.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:34:47.274 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:34:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:34:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:34:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:34:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:34:47 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #238. Immutable memtables: 0.
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.813861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 153] Flushing memtable with next log file: 238
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087813940, "job": 153, "event": "flush_started", "num_memtables": 1, "num_entries": 1029, "num_deletes": 346, "total_data_size": 1639716, "memory_usage": 1658312, "flush_reason": "Manual Compaction"}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 153] Level-0 flush table #239: started
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087825475, "cf_name": "default", "job": 153, "event": "table_file_creation", "file_number": 239, "file_size": 1076604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 118060, "largest_seqno": 119084, "table_properties": {"data_size": 1071940, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 14126, "raw_average_key_size": 22, "raw_value_size": 1061306, "raw_average_value_size": 1692, "num_data_blocks": 84, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 346, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096034, "oldest_key_time": 1769096034, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 239, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 153] Flush lasted 11669 microseconds, and 6739 cpu microseconds.
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825541) [db/flush_job.cc:967] [default] [JOB 153] Level-0 flush table #239: 1076604 bytes OK
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825569) [db/memtable_list.cc:519] [default] Level-0 commit table #239 started
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.827496) [db/memtable_list.cc:722] [default] Level-0 commit table #239: memtable #1 done
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.827518) EVENT_LOG_v1 {"time_micros": 1769096087827510, "job": 153, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.827544) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 153] Try to delete WAL files size 1634075, prev total WAL file size 1634075, number of live WAL files 2.
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000235.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.828541) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 154] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 153 Base level 0, inputs: [239(1051KB)], [237(11MB)]
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087828588, "job": 154, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [239], "files_L6": [237], "score": -1, "input_data_size": 13046319, "oldest_snapshot_seqno": -1}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 154] Generated table #240: 14267 keys, 11330775 bytes, temperature: kUnknown
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087914190, "cf_name": "default", "job": 154, "event": "table_file_creation", "file_number": 240, "file_size": 11330775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253743, "index_size": 40247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35717, "raw_key_size": 393619, "raw_average_key_size": 27, "raw_value_size": 11011917, "raw_average_value_size": 771, "num_data_blocks": 1445, "num_entries": 14267, "num_filter_entries": 14267, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 240, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.914572) [db/compaction/compaction_job.cc:1663] [default] [JOB 154] Compacted 1@0 + 1@6 files to L6 => 11330775 bytes
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.916349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 132.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(22.6) write-amplify(10.5) OK, records in: 14978, records dropped: 711 output_compression: NoCompression
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.916382) EVENT_LOG_v1 {"time_micros": 1769096087916367, "job": 154, "event": "compaction_finished", "compaction_time_micros": 85750, "compaction_time_cpu_micros": 40289, "output_level": 6, "num_output_files": 1, "total_output_size": 11330775, "num_input_records": 14978, "num_output_records": 14267, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000239.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087916915, "job": 154, "event": "table_file_deletion", "file_number": 239}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000237.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087921304, "job": 154, "event": "table_file_deletion", "file_number": 237}
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.828482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:47 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:34:48 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:48.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:48.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:48 compute-2 ceph-mon[77081]: pgmap v3831: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:48 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:49 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:49.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:49 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:49 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:50 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:50.146+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:50 compute-2 ceph-mon[77081]: pgmap v3832: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:50 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:51 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:51.123+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:51 compute-2 sudo[284914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:51 compute-2 sudo[284914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:51 compute-2 sudo[284914]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:51 compute-2 sudo[284939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:34:51 compute-2 sudo[284939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:51 compute-2 sudo[284939]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:51 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:34:51 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:52 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:52.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:52.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:52.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:53 compute-2 ceph-mon[77081]: pgmap v3833: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:53 compute-2 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:34:53 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:53.078+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:53 compute-2 sudo[284965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:53 compute-2 sudo[284965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:53 compute-2 sudo[284965]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:53 compute-2 sudo[284990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:34:53 compute-2 sudo[284990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:34:53 compute-2 sudo[284990]: pam_unix(sudo:session): session closed for user root
Jan 22 15:34:54 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:54.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:54 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:54 compute-2 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:54.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:55 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:55.054+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:55 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:55 compute-2 ceph-mon[77081]: pgmap v3834: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:55 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:56 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:56.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:56 compute-2 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:34:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:34:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:34:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:57 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:57.052+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:57 compute-2 ceph-mon[77081]: pgmap v3835: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:57 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:58 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:58.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:58.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:58 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:58 compute-2 ceph-mon[77081]: pgmap v3836: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:34:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:34:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:34:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:34:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:59.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:59 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:34:59 compute-2 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 7088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:34:59 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:34:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:59.979+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:59 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:34:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:00 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:00 compute-2 ceph-mon[77081]: pgmap v3837: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:00.974+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:00 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:01 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:01.987+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:01 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:02 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:02 compute-2 ceph-mon[77081]: pgmap v3838: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:03.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:03 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:04.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:04 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:04 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:05.057+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:05 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:05 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:05 compute-2 ceph-mon[77081]: pgmap v3839: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:05 compute-2 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7093 sec, osd.2 has slow ops (SLOW_OPS)
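The SLOW_OPS health updates carry an "oldest one blocked for N sec" counter that climbs in lockstep with the log clock (7088, 7093, 7098, ... at ~5 s intervals), so the oldest op on osd.2 has been wedged for nearly two hours rather than slowly draining. A sketch that extracts these counters and pairs them with syslog timestamps (regex is an assumption matched to these lines):

import re
from datetime import datetime

HEALTH = re.compile(r'(?P<ops>\d+) slow ops, oldest one blocked for (?P<sec>\d+) sec')

def blocked_ages(lines, year=2026):
    """Yield (timestamp, slow_op_count, blocked_seconds) per health update."""
    for line in lines:
        m = HEALTH.search(line)
        if not m:
            continue
        # syslog timestamps omit the year; supply it explicitly.
        ts = datetime.strptime(f"{year} {line[:15]}", "%Y %b %d %H:%M:%S")
        yield ts, int(m.group('ops')), int(m.group('sec'))

If blocked_seconds rises roughly 1:1 with wall-clock time, as it does here, the op is truly stuck; a slowly draining queue would show the count falling while the age plateaus.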
Jan 22 15:35:06 compute-2 podman[285021]: 2026-01-22 15:35:06.084203778 +0000 UTC m=+0.133809638 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 15:35:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:06.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:06 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:06 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:06.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:07.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:07 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:07 compute-2 ceph-mon[77081]: pgmap v3840: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:07 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:08.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:08 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:08 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:08 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:08 compute-2 ceph-mon[77081]: pgmap v3841: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
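Each pgmap line is a one-line cluster snapshot: PG state counts, logical data size, raw usage, and free/total capacity. The 2 "active+clean+laggy" PGs are the ones backing the blocked ops. A parsing sketch (regex inferred from these lines):

import re

PGMAP = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); '
    r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
    r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail'
)

def parse_pgmap(line):
    m = PGMAP.search(line)
    if not m:
        return None
    d = m.groupdict()
    # "2 active+clean+laggy, 303 active+clean" -> {'active+clean+laggy': 2, ...}
    d['states'] = {s.split(' ', 1)[1]: int(s.split(' ', 1)[0])
                   for s in d['states'].split(', ')}
    return d

line = ('pgmap v3841: 305 pgs: 2 active+clean+laggy, 303 active+clean; '
        '882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail')
print(parse_pgmap(line)['states'])  # {'active+clean+laggy': 2, 'active+clean': 303}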
Jan 22 15:35:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:08.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:09.084+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:09 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:09 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:09 compute-2 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:10.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:10 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:10 compute-2 ceph-mon[77081]: pgmap v3842: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:10 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:11.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:11 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:11 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:12.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:12 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:13.095+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:13 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:13 compute-2 ceph-mon[77081]: pgmap v3843: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:13 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:13 compute-2 sudo[285052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:13 compute-2 sudo[285052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:13 compute-2 sudo[285052]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:13 compute-2 sudo[285077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:13 compute-2 sudo[285077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:13 compute-2 sudo[285077]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:14.107+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:14 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:14 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:14 compute-2 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:15.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:15 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:15 compute-2 ceph-mon[77081]: pgmap v3844: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:15 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:16.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:16 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:16.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:16.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:17.133+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:17 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:17 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:17 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:18 compute-2 podman[285104]: 2026-01-22 15:35:18.037349853 +0000 UTC m=+0.094450441 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:35:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:18.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:18 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:18 compute-2 ceph-mon[77081]: pgmap v3845: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:18 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:35:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:35:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:35:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
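These audit entries show client.openstack (192.168.122.10, likely the control plane's periodic capacity poll) issuing a "df" and an "osd pool get-quota" mon command in JSON format. The same queries can be reproduced by hand with the ceph CLI; a subprocess sketch, assuming a host or container where `ceph` is on PATH with admin credentials (field names follow recent Ceph releases):

import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI subcommand and decode its JSON output."""
    out = subprocess.check_output(['ceph', *args, '--format', 'json'])
    return json.loads(out)

df = ceph_json('df')                                      # same as {"prefix":"df"}
quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')  # same as the second audit entry
print(df['stats']['total_avail_bytes'], quota)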
Jan 22 15:35:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:18.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:19.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:19 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:19 compute-2 ceph-mon[77081]: pgmap v3846: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:19 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:35:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:35:19 compute-2 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:20.160+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:20 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:20.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:21.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:21 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:21 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:22.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:22 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:22 compute-2 ceph-mon[77081]: pgmap v3847: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:22 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:22 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:23.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:23 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:23 compute-2 ceph-mon[77081]: pgmap v3848: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:23 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:24.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:24 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:24 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:24 compute-2 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:25.201+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:25 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:25 compute-2 ceph-mon[77081]: pgmap v3849: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:25 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:25 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:26.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:26 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:26 compute-2 ceph-mon[77081]: pgmap v3850: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:26 compute-2 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:35:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:27.259+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:27 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:27 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
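Between 15:35:26 and 15:35:27 the reported slow-op count jumps from 37 to 187 (and the 'vms' pool share from 24 to 106): new client ops keep landing on the laggy PGs and a batch of them has just aged past the complaint threshold (osd_op_complaint_time, 30 s by default). A sketch that flags such step changes when scanning a log; the jump threshold is an arbitrary illustrative choice:

import re

SLOW = re.compile(r'reporting (?P<n>\d+) slow ops')

def slow_op_steps(lines, jump=50):
    """Yield (prev, cur, line) whenever the slow-op count rises by >= `jump`."""
    prev = None
    for line in lines:
        m = SLOW.search(line)
        if not m:
            continue
        cur = int(m.group('n'))
        if prev is not None and cur - prev >= jump:
            yield prev, cur, line
        prev = cur

# Against this log it would flag the 37 -> 187 step at 15:35:27, the moment
# a fresh batch of ops on the laggy PGs crossed the complaint threshold.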
Jan 22 15:35:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:28.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:28 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:28 compute-2 ceph-mon[77081]: pgmap v3851: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:28 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:35:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 22K writes, 119K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 22K writes, 22K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1787 writes, 10K keys, 1787 commit groups, 1.0 writes per commit group, ingest: 16.58 MB, 0.03 MB/s
                                           Interval WAL: 1787 writes, 1787 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.4      2.33              0.47        77    0.030       0      0       0.0       0.0
                                             L6      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    110.8     96.0      7.82              2.53        76    0.103    834K    46K       0.0       0.0
                                            Sum      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.9      0.1       0.0   6.9     85.4     86.5     10.15              3.00       153    0.066    834K    46K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     62.0     62.6      1.26              0.26        12    0.105     91K   5756       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    110.8     96.0      7.82              2.53        76    0.103    834K    46K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.5      2.33              0.47        76    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.124, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.86 GB write, 0.12 MB/s write, 0.85 GB read, 0.12 MB/s read, 10.2 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 88.10 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000746 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4586,83.06 MB,27.3233%) FilterBlock(153,2.27 MB,0.74629%) IndexBlock(153,2.77 MB,0.909996%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
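The monitor's RocksDB dump above rules the mon's own store out as the bottleneck: in the 600 s interval it ingested 16.58 MB over 1787 WAL-synced writes with zero stalls. As a rough cross-check, (0.86 GB compaction writes + 0.124 GB flushes) / 0.20 GB WAL ingest gives about 4.9x write amplification; note this is normalized against WAL ingest, whereas the printed W-Amp column (6.9 for Sum) appears to use bytes flushed into L0 as its denominator, so the figures differ by construction. A sketch computing the approximation from the dump text; the parsing is tied to this exact dump layout:

import re

def rough_write_amp(dump):
    """Approximate write amplification: (compaction + flush bytes) / WAL ingest.

    Normalized differently from RocksDB's own W-Amp column, so the numbers
    will not match it exactly.
    """
    ingest = float(re.search(r'Cumulative writes:.*ingest: ([\d.]+) GB', dump).group(1))
    compact = float(re.search(r'Cumulative compaction: ([\d.]+) GB write', dump).group(1))
    flush = float(re.search(r'Flush\(GB\): cumulative ([\d.]+)', dump).group(1))
    return (compact + flush) / ingest

# For the dump above: (0.86 + 0.124) / 0.20 ~= 4.9x.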
Jan 22 15:35:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:29.317+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:29 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:29 compute-2 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:29 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:30.322+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:30 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:30 compute-2 ceph-mon[77081]: pgmap v3852: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:30 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:31.281+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:31 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:32 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:32.246+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:32 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:33.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:33 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:34 compute-2 sudo[285131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:34 compute-2 sudo[285131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:34 compute-2 sudo[285131]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:34 compute-2 sudo[285156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:34 compute-2 sudo[285156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:34 compute-2 sudo[285156]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:34.255+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:34 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:34 compute-2 ceph-mon[77081]: pgmap v3853: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:34 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:35.283+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:35 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:35 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:35 compute-2 ceph-mon[77081]: pgmap v3854: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:35 compute-2 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:35 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:35 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:36.278+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:36 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:36 compute-2 ceph-mon[77081]: pgmap v3855: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:36 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:37 compute-2 podman[285182]: 2026-01-22 15:35:37.05040426 +0000 UTC m=+0.103292717 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 15:35:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:37.298+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:37 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:37 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:38.265+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:38 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:38.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:38 compute-2 ceph-mon[77081]: pgmap v3856: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:38 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:39.273+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:39 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:39 compute-2 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:39 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:40.231+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:40 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:40.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:40 compute-2 ceph-mon[77081]: pgmap v3857: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:40 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:41.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:41 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:42.280+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:42 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:42 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:43.289+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:43 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:43 compute-2 ceph-mon[77081]: pgmap v3858: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:43 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:43 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:44.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:44.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:44 compute-2 ceph-mon[77081]: pgmap v3859: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:44 compute-2 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:44 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:45.256+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:45 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:45 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:46.268+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:46 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:46.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:46 compute-2 ceph-mon[77081]: pgmap v3860: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:46 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:47.247+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:47 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:35:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:35:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:35:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:35:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:35:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:35:47 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:48.239+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:48 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:35:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:35:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:48 compute-2 ceph-mon[77081]: pgmap v3861: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:48 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:49 compute-2 podman[285214]: 2026-01-22 15:35:49.000295 +0000 UTC m=+0.063595252 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:35:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:49 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:50 compute-2 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:50 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:50.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:50 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:50.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:35:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:35:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:51.179+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:51 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:51 compute-2 ceph-mon[77081]: pgmap v3862: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:51 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:51 compute-2 sudo[285235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:51 compute-2 sudo[285235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:51 compute-2 sudo[285235]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:51 compute-2 sudo[285260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:35:51 compute-2 sudo[285260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:51 compute-2 sudo[285260]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:51 compute-2 sudo[285285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:51 compute-2 sudo[285285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:51 compute-2 sudo[285285]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:51 compute-2 sudo[285310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:35:51 compute-2 sudo[285310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-2 sudo[285310]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:52.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:52 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:52 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:52 compute-2 sudo[285366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:52 compute-2 sudo[285366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-2 sudo[285366]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-2 sudo[285391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:35:52 compute-2 sudo[285391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-2 sudo[285391]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-2 sudo[285416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:52 compute-2 sudo[285416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-2 sudo[285416]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-2 sudo[285441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 22 15:35:52 compute-2 sudo[285441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:52 compute-2 sudo[285441]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:52.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:53 compute-2 ceph-mon[77081]: pgmap v3863: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:53 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:53 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:53.235+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:53 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:54 compute-2 sudo[285486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:54 compute-2 sudo[285486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:54 compute-2 sudo[285486]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:54 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:54 compute-2 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:35:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:54.240+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:54 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:54 compute-2 sudo[285511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:35:54 compute-2 sudo[285511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:35:54 compute-2 sudo[285511]: pam_unix(sudo:session): session closed for user root
Jan 22 15:35:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:54.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:35:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:54.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:55.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:55 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:55 compute-2 ceph-mon[77081]: pgmap v3864: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:55 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:35:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:35:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:56.253+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:56 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:56.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:57 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:57 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:57.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:35:58 compute-2 ceph-mon[77081]: pgmap v3865: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:58 compute-2 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:35:58 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:35:58 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:58.197+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:35:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:58.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:35:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:35:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:58.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:35:59 compute-2 ceph-mon[77081]: pgmap v3866: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:35:59 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:35:59 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:59.219+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:35:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:35:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:00 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:00.177+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:00 compute-2 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:00 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:00.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:00 compute-2 sudo[285539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:00 compute-2 sudo[285539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:00 compute-2 sudo[285539]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:01 compute-2 sudo[285565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:36:01 compute-2 sudo[285565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:01 compute-2 sudo[285565]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:01 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:01.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:01 compute-2 ceph-mon[77081]: pgmap v3867: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:01 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:36:01 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:36:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:02.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:02 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:02 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:02 compute-2 ceph-mon[77081]: pgmap v3868: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:02 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:02.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:03.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:03 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:03 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:04.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:04 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:04.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:04 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:04 compute-2 ceph-mon[77081]: pgmap v3869: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:04 compute-2 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:05.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:05 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:06.058+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:06 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:06.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:07.105+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:07 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:07 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:07 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:07 compute-2 ceph-mon[77081]: pgmap v3870: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:07 compute-2 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 15:36:08 compute-2 podman[285593]: 2026-01-22 15:36:08.016753227 +0000 UTC m=+0.076477684 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 15:36:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:08.065+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:08 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:08 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:08.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:09.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:09 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:09 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:09 compute-2 ceph-mon[77081]: pgmap v3871: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:09 compute-2 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:10.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:10 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:10 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:10.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:11.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:11 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:11 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:11 compute-2 ceph-mon[77081]: pgmap v3872: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:12.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:12 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:12 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:12 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:12 compute-2 ceph-mon[77081]: pgmap v3873: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:13.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:13 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:13 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:14.064+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:14 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:14 compute-2 sudo[285624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:14 compute-2 sudo[285624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:14 compute-2 sudo[285624]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:14 compute-2 sudo[285649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:14 compute-2 sudo[285649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:14 compute-2 sudo[285649]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:14 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:14 compute-2 ceph-mon[77081]: pgmap v3874: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:14 compute-2 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:15.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:15 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:15 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:16.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:16 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:16.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:16 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:16 compute-2 ceph-mon[77081]: pgmap v3875: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:17.092+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:17 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:17 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:18.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:18 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:18 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:18 compute-2 ceph-mon[77081]: pgmap v3876: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:36:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/2621302862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:36:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:19.153+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:19 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:19 compute-2 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:19 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:20 compute-2 podman[285677]: 2026-01-22 15:36:20.01457229 +0000 UTC m=+0.068928503 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 15:36:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:20.111+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:20 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:20.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:20 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:20 compute-2 ceph-mon[77081]: pgmap v3877: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:21.117+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:21 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:22.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:22 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:22 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:23.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:23 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:23 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:23 compute-2 ceph-mon[77081]: pgmap v3878: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:24.161+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:24 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:24.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:24.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:25.121+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:25 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:25 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:25 compute-2 ceph-mon[77081]: pgmap v3879: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:25 compute-2 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:25 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:26.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:26 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:26.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:26.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:26 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:27.116+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:27 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:28 compute-2 ceph-mon[77081]: pgmap v3880: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:28.150+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:28 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:28.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:29.200+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:29 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:29 compute-2 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:36:29 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:29 compute-2 ceph-mon[77081]: pgmap v3881: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:29 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:30.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:30 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:30 compute-2 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:30 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:31.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:31 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:36:31 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.5 total, 600.0 interval
                                           Cumulative writes: 14K writes, 44K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                           Cumulative WAL: 14K writes, 4846 syncs, 2.94 writes per sync, written: 0.03 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 636 writes, 1117 keys, 636 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 636 writes, 315 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:36:31 compute-2 ceph-mon[77081]: pgmap v3882: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:31 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:32.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:32 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:33.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:33 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:33 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:33 compute-2 ceph-mon[77081]: pgmap v3883: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:34.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:34 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:34 compute-2 sudo[285703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:34 compute-2 sudo[285703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:34 compute-2 sudo[285703]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:34 compute-2 sudo[285728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:34 compute-2 sudo[285728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:34 compute-2 sudo[285728]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:34 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:34 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:34 compute-2 ceph-mon[77081]: pgmap v3884: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:34 compute-2 ceph-mon[77081]: Health check update: 188 slow ops, oldest one blocked for 7183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:35.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:35 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:36.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:36 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:36 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:37.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:37 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:37 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:37 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:37 compute-2 ceph-mon[77081]: pgmap v3885: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:38.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:38 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:38 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:38.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:39 compute-2 podman[285755]: 2026-01-22 15:36:39.064052145 +0000 UTC m=+0.118546532 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 15:36:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:39.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:39 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:39 compute-2 ceph-mon[77081]: pgmap v3886: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:39 compute-2 ceph-mon[77081]: Health check update: 188 slow ops, oldest one blocked for 7188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:40.167+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:40 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:40.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:40 compute-2 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:36:40 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:40 compute-2 ceph-mon[77081]: pgmap v3887: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:41.181+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:41 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:42.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:42 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:42 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:42 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:43.199+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:43 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:43 compute-2 ceph-mon[77081]: pgmap v3888: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:43 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:44.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:36:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:36:44 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:44 compute-2 ceph-mon[77081]: pgmap v3889: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:44 compute-2 ceph-mon[77081]: Health check update: 120 slow ops, oldest one blocked for 7193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:45.251+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:45 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:45 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:45 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:46.232+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:46 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:47 compute-2 ceph-mon[77081]: pgmap v3890: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:47 compute-2 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:36:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:47.213+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:47 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:36:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:36:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:36:47.276 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:36:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:36:47.276 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:36:48 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:48.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:48 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:36:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:48.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:36:49 compute-2 ceph-mon[77081]: pgmap v3891: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:49 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:49.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:49 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:50 compute-2 ceph-mon[77081]: Health check update: 120 slow ops, oldest one blocked for 7197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:50 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:50.208+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:50 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:51 compute-2 podman[285788]: 2026-01-22 15:36:51.035400781 +0000 UTC m=+0.087998045 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:36:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:51.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:51.180+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:51 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:51 compute-2 ceph-mon[77081]: pgmap v3892: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:51 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:52.193+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:52 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:52 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:53.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:53 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:53 compute-2 ceph-mon[77081]: pgmap v3893: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:53 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:54.142+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:54 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:54 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:54 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:54 compute-2 sudo[285810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:54 compute-2 sudo[285810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:54 compute-2 sudo[285810]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:54 compute-2 sudo[285835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:36:54 compute-2 sudo[285835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:36:54 compute-2 sudo[285835]: pam_unix(sudo:session): session closed for user root
Jan 22 15:36:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:36:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:55.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:55.169+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:55 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:55 compute-2 ceph-mon[77081]: pgmap v3894: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:55 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:56.151+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:56 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:56 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:36:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:36:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:57.163+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:57 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:57 compute-2 ceph-mon[77081]: pgmap v3895: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:57 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:58.147+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:58 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:58 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:36:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:36:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:36:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:59.137+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:59 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:36:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:59 compute-2 ceph-mon[77081]: pgmap v3896: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:36:59 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:36:59 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:36:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:00.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:00 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:00 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:01 compute-2 sudo[285864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:01 compute-2 sudo[285864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:01.134+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:01 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:01 compute-2 sudo[285864]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:01 compute-2 sudo[285889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:37:01 compute-2 sudo[285889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-2 sudo[285889]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:01 compute-2 sudo[285914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:01 compute-2 sudo[285914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-2 sudo[285914]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:01 compute-2 sudo[285939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:37:01 compute-2 sudo[285939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:01 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:01 compute-2 ceph-mon[77081]: pgmap v3897: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Jan 22 15:37:02 compute-2 sudo[285939]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:02.178+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:02 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:02 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:02 compute-2 ceph-mon[77081]: pgmap v3898: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 22 15:37:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:03.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:03 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:03 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:04.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:04 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:04 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:04 compute-2 ceph-mon[77081]: pgmap v3899: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Jan 22 15:37:04 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:04 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:37:04 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:37:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:05.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:05.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:05 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:06 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:06.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:06 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:07.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:07 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:07 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:07 compute-2 ceph-mon[77081]: pgmap v3900: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 15:37:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:08.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:08 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:08 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:09.036+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:09 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:09 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:09 compute-2 ceph-mon[77081]: pgmap v3901: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 149 op/s
Jan 22 15:37:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:37:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:37:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:10 compute-2 podman[285999]: 2026-01-22 15:37:10.029191595 +0000 UTC m=+0.081977500 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 15:37:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:10.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:10 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:10 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:10 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:10 compute-2 sudo[286029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:10 compute-2 sudo[286029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:10 compute-2 sudo[286029]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:10 compute-2 sudo[286054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:37:10 compute-2 sudo[286054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:10 compute-2 sudo[286054]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:10.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:11.074+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:11 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:11 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:11 compute-2 ceph-mon[77081]: pgmap v3902: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 0 B/s wr, 195 op/s
Jan 22 15:37:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:11 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:37:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:12.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:12 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:12 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:13.062+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:13 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:37:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:37:13 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:13 compute-2 ceph-mon[77081]: pgmap v3903: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 22 15:37:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:14.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:14 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:14 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:14 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:14.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:14 compute-2 sudo[286081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:14 compute-2 sudo[286081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:14 compute-2 sudo[286081]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:14 compute-2 sudo[286106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:14 compute-2 sudo[286106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:14 compute-2 sudo[286106]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:15.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:15.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:15 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:15 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:15 compute-2 ceph-mon[77081]: pgmap v3904: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 163 op/s
Jan 22 15:37:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:16.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:16 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:16 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:16.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:17.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:17 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:17 compute-2 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:37:17 compute-2 ceph-mon[77081]: pgmap v3905: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 0 B/s wr, 143 op/s
Jan 22 15:37:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:18.174+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:18 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:18 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 15:37:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:18.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 15:37:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:19.125+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:19 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:19 compute-2 ceph-mon[77081]: pgmap v3906: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 22 15:37:19 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:37:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3339096401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:37:19 compute-2 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:20.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:20 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:20.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:20 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:20 compute-2 ceph-mon[77081]: pgmap v3907: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 22 15:37:21 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:21.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:21 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:22 compute-2 podman[286135]: 2026-01-22 15:37:22.016148851 +0000 UTC m=+0.079163593 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 15:37:22 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:22.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:22 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:22 compute-2 ceph-mon[77081]: pgmap v3908: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 8 op/s
Jan 22 15:37:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:23.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:23.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:23 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:23 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:24.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:24 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:24.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:24 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:24 compute-2 ceph-mon[77081]: pgmap v3909: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:24 compute-2 ceph-mon[77081]: Health check update: 132 slow ops, oldest one blocked for 7232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:25.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:25 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:25 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:26.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:26 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:27.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:27 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:27 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:37:27.049 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:37:27 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:37:27.051 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:37:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:28.002+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:28 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:28 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:28 compute-2 ceph-mon[77081]: pgmap v3910: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:37:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:28.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:37:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:29.043+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:29 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:29 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:37:29.053 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:37:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:29.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:29 compute-2 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:37:29 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:29 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:29 compute-2 ceph-mon[77081]: pgmap v3911: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:30.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:30 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:30 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:30 compute-2 ceph-mon[77081]: Health check update: 132 slow ops, oldest one blocked for 7237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:31.069+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:31 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:31.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:31 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:31 compute-2 ceph-mon[77081]: pgmap v3912: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:32.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:32 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:32 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:33.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:33 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:33 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:33 compute-2 ceph-mon[77081]: pgmap v3913: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:34.044+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:34 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:34 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:34 compute-2 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:34 compute-2 sudo[286160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:34 compute-2 sudo[286160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:34 compute-2 sudo[286160]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:35.011+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:35 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:35 compute-2 sudo[286186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:35 compute-2 sudo[286186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:35 compute-2 sudo[286186]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:35 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:35 compute-2 ceph-mon[77081]: pgmap v3914: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:35.989+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:35 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:36 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:36 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:36.960+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:37 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:37 compute-2 ceph-mon[77081]: pgmap v3915: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:38 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:38.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:38 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #241. Immutable memtables: 0.
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.471479) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 155] Flushing memtable with next log file: 241
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258471572, "job": 155, "event": "flush_started", "num_memtables": 1, "num_entries": 2752, "num_deletes": 540, "total_data_size": 5172675, "memory_usage": 5270320, "flush_reason": "Manual Compaction"}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 155] Level-0 flush table #242: started
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258494727, "cf_name": "default", "job": 155, "event": "table_file_creation", "file_number": 242, "file_size": 3350889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 119089, "largest_seqno": 121836, "table_properties": {"data_size": 3340601, "index_size": 5693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32349, "raw_average_key_size": 23, "raw_value_size": 3316033, "raw_average_value_size": 2397, "num_data_blocks": 239, "num_entries": 1383, "num_filter_entries": 1383, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096088, "oldest_key_time": 1769096088, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 242, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 155] Flush lasted 23273 microseconds, and 7905 cpu microseconds.
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494775) [db/flush_job.cc:967] [default] [JOB 155] Level-0 flush table #242: 3350889 bytes OK
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494794) [db/memtable_list.cc:519] [default] Level-0 commit table #242 started
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.496819) [db/memtable_list.cc:722] [default] Level-0 commit table #242: memtable #1 done
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.496834) EVENT_LOG_v1 {"time_micros": 1769096258496828, "job": 155, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.496855) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 155] Try to delete WAL files size 5159195, prev total WAL file size 5159195, number of live WAL files 2.
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000238.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.498233) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 156] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 155 Base level 0, inputs: [242(3272KB)], [240(10MB)]
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258498280, "job": 156, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [242], "files_L6": [240], "score": -1, "input_data_size": 14681664, "oldest_snapshot_seqno": -1}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 156] Generated table #243: 14553 keys, 12845888 bytes, temperature: kUnknown
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258596384, "cf_name": "default", "job": 156, "event": "table_file_creation", "file_number": 243, "file_size": 12845888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12765564, "index_size": 42835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 399026, "raw_average_key_size": 27, "raw_value_size": 12517622, "raw_average_value_size": 860, "num_data_blocks": 1556, "num_entries": 14553, "num_filter_entries": 14553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 243, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.596650) [db/compaction/compaction_job.cc:1663] [default] [JOB 156] Compacted 1@0 + 1@6 files to L6 => 12845888 bytes
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598519) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.5 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 15650, records dropped: 1097 output_compression: NoCompression
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598533) EVENT_LOG_v1 {"time_micros": 1769096258598526, "job": 156, "event": "compaction_finished", "compaction_time_micros": 98201, "compaction_time_cpu_micros": 44122, "output_level": 6, "num_output_files": 1, "total_output_size": 12845888, "num_input_records": 15650, "num_output_records": 14553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000242.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258599137, "job": 156, "event": "table_file_deletion", "file_number": 242}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000240.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258600913, "job": 156, "event": "table_file_deletion", "file_number": 240}
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.498113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:37:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:37:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:38.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:37:38 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:38.994+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:39 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:39 compute-2 ceph-mon[77081]: pgmap v3916: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:39 compute-2 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:39 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:39.981+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:40 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:40 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:40.978+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:41 compute-2 podman[286213]: 2026-01-22 15:37:41.087294979 +0000 UTC m=+0.149796012 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 22 15:37:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:41 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:41 compute-2 ceph-mon[77081]: pgmap v3917: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:41 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:41.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:42.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:42 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:42.952+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:43 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:43 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:43 compute-2 ceph-mon[77081]: pgmap v3918: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:43 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:43.915+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:44.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:44.943+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:45 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:45 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:45 compute-2 ceph-mon[77081]: pgmap v3919: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:45 compute-2 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:37:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:37:45 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:45.903+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:46 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:46.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:46 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:46.885+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:47 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:47 compute-2 ceph-mon[77081]: pgmap v3920: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:47 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:37:47.276 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:37:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:37:47.277 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:37:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:37:47.277 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:37:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:47.935+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:47 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:37:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:48.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:37:48 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:48.944+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:49.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:49 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:49 compute-2 ceph-mon[77081]: pgmap v3921: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:49 compute-2 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:49 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:49.953+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:50 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:50 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:50.996+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:51 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:51 compute-2 ceph-mon[77081]: pgmap v3922: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:51 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:51.980+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:52 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:52 compute-2 podman[286245]: 2026-01-22 15:37:52.992255956 +0000 UTC m=+0.051525158 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 15:37:53 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:53.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:53.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:53 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:53 compute-2 ceph-mon[77081]: pgmap v3923: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:54 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:54.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:54 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:54 compute-2 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:37:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:54.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:37:54 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:54.998+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:55 compute-2 sudo[286268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:55 compute-2 sudo[286268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:55 compute-2 sudo[286268]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:37:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:55.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:37:55 compute-2 sudo[286293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:37:55 compute-2 sudo[286293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:37:55 compute-2 sudo[286293]: pam_unix(sudo:session): session closed for user root
Jan 22 15:37:56 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:56.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:56 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:56 compute-2 ceph-mon[77081]: pgmap v3924: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:37:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:37:57 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:56.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:37:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:57.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:57 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:57 compute-2 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:37:57 compute-2 ceph-mon[77081]: pgmap v3925: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:58 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:58.023+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:37:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:58.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:59 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:37:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:59.014+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:37:59 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:37:59 compute-2 ceph-mon[77081]: pgmap v3926: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:37:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:37:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:37:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:37:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:00 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:00.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:00.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:00 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:00 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:00 compute-2 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:01 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:01.024+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:01 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e181 e181: 3 total, 3 up, 3 in
Jan 22 15:38:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:01 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:01 compute-2 ceph-mon[77081]: pgmap v3927: 305 pgs: 2 active+clean+laggy, 303 active+clean; 894 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 1.0 MiB/s wr, 7 op/s
Jan 22 15:38:01 compute-2 ceph-mon[77081]: osdmap e181: 3 total, 3 up, 3 in
Jan 22 15:38:01 compute-2 ceph-osd[79779]: osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:01.992+0000 7f47f8ed4640 -1 osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:38:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:02.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:38:02 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e182 e182: 3 total, 3 up, 3 in
Jan 22 15:38:02 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:02 compute-2 ceph-mon[77081]: pgmap v3929: 305 pgs: 2 active+clean+laggy, 303 active+clean; 894 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 1.2 MiB/s wr, 9 op/s
Jan 22 15:38:02 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 15:38:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:02.995+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:03 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:03.988+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:04 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:04 compute-2 ceph-mon[77081]: osdmap e182: 3 total, 3 up, 3 in
Jan 22 15:38:04 compute-2 ceph-mon[77081]: 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 15:38:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:04 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:04.958+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:05 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:05 compute-2 ceph-mon[77081]: pgmap v3931: 305 pgs: 2 active+clean+laggy, 303 active+clean; 902 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 20 op/s
Jan 22 15:38:05 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:06 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:06.002+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:06 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:06 compute-2 ceph-mon[77081]: pgmap v3932: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.6 MiB/s wr, 50 op/s
Jan 22 15:38:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:06.963+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:06 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:07.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:07 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:07.955+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:07 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:08.986+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:08 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:09 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:09 compute-2 ceph-mon[77081]: pgmap v3933: 305 pgs: 2 active+clean+laggy, 303 active+clean; 890 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.0 MiB/s wr, 38 op/s
Jan 22 15:38:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 e183: 3 total, 3 up, 3 in
Jan 22 15:38:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:09.988+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:10 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:10 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:10 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:10 compute-2 ceph-mon[77081]: osdmap e183: 3 total, 3 up, 3 in
Jan 22 15:38:10 compute-2 sudo[286325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:10 compute-2 sudo[286325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:10 compute-2 sudo[286325]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:10 compute-2 sudo[286350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:38:10 compute-2 sudo[286350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:10 compute-2 sudo[286350]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:10 compute-2 sudo[286375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:10 compute-2 sudo[286375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:10 compute-2 sudo[286375]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:10 compute-2 sudo[286400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:38:10 compute-2 sudo[286400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:11.019+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:11 compute-2 sudo[286400]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:11 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:11 compute-2 ceph-mon[77081]: pgmap v3935: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.0 MiB/s wr, 42 op/s
Jan 22 15:38:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:12.006+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:12 compute-2 podman[286457]: 2026-01-22 15:38:12.042589586 +0000 UTC m=+0.108916176 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 15:38:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:12 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:12 compute-2 ceph-mon[77081]: pgmap v3936: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 879 KiB/s wr, 36 op/s
Jan 22 15:38:12 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:12.989+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:13 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:38:13 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:38:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:13.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:14.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:14.992+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:15 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:15 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:15 compute-2 ceph-mon[77081]: pgmap v3937: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Jan 22 15:38:15 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:15.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:15 compute-2 sudo[286486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:15 compute-2 sudo[286486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:15 compute-2 sudo[286486]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:15 compute-2 sudo[286511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:15 compute-2 sudo[286511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:15 compute-2 sudo[286511]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:16.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:16 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:17.020+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:38:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:17.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:38:17 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:17 compute-2 ceph-mon[77081]: pgmap v3938: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 15:38:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:17.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:38:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:38:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:38:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:38:18 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:18 compute-2 ceph-mon[77081]: pgmap v3939: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 4 op/s
Jan 22 15:38:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:38:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:38:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:19.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:19.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:19 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:19 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:19.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:20 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:20 compute-2 ceph-mon[77081]: pgmap v3940: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:38:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:20.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:38:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:20.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:21.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:21 compute-2 sudo[286539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:21 compute-2 sudo[286539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:21 compute-2 sudo[286539]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:21 compute-2 sudo[286564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:38:21 compute-2 sudo[286564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:21 compute-2 sudo[286564]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:21 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:21 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:38:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:21.968+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:22.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:22 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:22 compute-2 ceph-mon[77081]: pgmap v3941: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:23.012+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:23 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:23.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:23 compute-2 podman[286590]: 2026-01-22 15:38:23.99862334 +0000 UTC m=+0.054120309 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 15:38:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:24 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:24 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:24 compute-2 ceph-mon[77081]: pgmap v3942: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:24 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:24.938+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:25 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:25.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:26 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:26 compute-2 ceph-mon[77081]: pgmap v3943: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:27.005+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:27 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:28.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:28.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:29.022+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:29 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:29 compute-2 ceph-mon[77081]: pgmap v3944: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:30.051+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:30 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:30 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:30.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:31.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:31 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:31 compute-2 ceph-mon[77081]: pgmap v3945: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:31 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:38:31.236 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:38:31 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:38:31.237 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:38:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:32.028+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:32 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:32 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:32.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:33.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:33.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:33 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:33 compute-2 ceph-mon[77081]: pgmap v3946: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:34.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:34 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:34 compute-2 ceph-mon[77081]: pgmap v3947: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:34 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:34.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:35.053+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:35.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:35 compute-2 sudo[286615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:35 compute-2 sudo[286615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:35 compute-2 sudo[286615]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:35 compute-2 sudo[286640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:35 compute-2 sudo[286640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:35 compute-2 sudo[286640]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:35 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:36.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:36 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:38:36.239 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 15:38:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:37.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:37 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:37 compute-2 ceph-mon[77081]: pgmap v3948: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:38.009+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:38 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:38 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:38 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:39.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:39 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:39 compute-2 ceph-mon[77081]: pgmap v3949: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:40.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:40 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:40 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:41 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:41.114+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:41 compute-2 ceph-mon[77081]: pgmap v3950: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:41 compute-2 sshd-session[286669]: error: kex_exchange_identification: read: Connection reset by peer
Jan 22 15:38:41 compute-2 sshd-session[286669]: Connection reset by 176.120.22.52 port 60390
Jan 22 15:38:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:42.158+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:42 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:43 compute-2 podman[286670]: 2026-01-22 15:38:43.028093611 +0000 UTC m=+0.087231704 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 15:38:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:43.159+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:43.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:43 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:43 compute-2 ceph-mon[77081]: pgmap v3951: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:44.124+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:44 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:44 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 15:38:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:44.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 15:38:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:45.092+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:45.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:45 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:45 compute-2 ceph-mon[77081]: pgmap v3952: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:46.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:46 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:46 compute-2 ceph-mon[77081]: pgmap v3953: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:47.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:47.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:38:47.278 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:38:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:38:47.278 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:38:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:38:47.278 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:38:47 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:48.058+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:48.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:49.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:38:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:38:49 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:49 compute-2 ceph-mon[77081]: pgmap v3954: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #244. Immutable memtables: 0.
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.718267) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 157] Flushing memtable with next log file: 244
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329718375, "job": 157, "event": "flush_started", "num_memtables": 1, "num_entries": 1305, "num_deletes": 380, "total_data_size": 2117760, "memory_usage": 2160496, "flush_reason": "Manual Compaction"}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 157] Level-0 flush table #245: started
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329734125, "cf_name": "default", "job": 157, "event": "table_file_creation", "file_number": 245, "file_size": 1389488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 121841, "largest_seqno": 123141, "table_properties": {"data_size": 1384040, "index_size": 2458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16488, "raw_average_key_size": 22, "raw_value_size": 1371300, "raw_average_value_size": 1838, "num_data_blocks": 104, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 380, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096259, "oldest_key_time": 1769096259, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 245, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 157] Flush lasted 15937 microseconds, and 8463 cpu microseconds.
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.734204) [db/flush_job.cc:967] [default] [JOB 157] Level-0 flush table #245: 1389488 bytes OK
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.734234) [db/memtable_list.cc:519] [default] Level-0 commit table #245 started
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.737445) [db/memtable_list.cc:722] [default] Level-0 commit table #245: memtable #1 done
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.737519) EVENT_LOG_v1 {"time_micros": 1769096329737503, "job": 157, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.737558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 157] Try to delete WAL files size 2110885, prev total WAL file size 2110885, number of live WAL files 2.
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000241.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.738788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035373930' seq:72057594037927935, type:22 .. '6C6F676D0036303433' seq:0, type:0; will stop at (end)
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 158] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 157 Base level 0, inputs: [245(1356KB)], [243(12MB)]
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329738872, "job": 158, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [245], "files_L6": [243], "score": -1, "input_data_size": 14235376, "oldest_snapshot_seqno": -1}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 158] Generated table #246: 14520 keys, 14040889 bytes, temperature: kUnknown
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329839267, "cf_name": "default", "job": 158, "event": "table_file_creation", "file_number": 246, "file_size": 14040889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13959194, "index_size": 44270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 398915, "raw_average_key_size": 27, "raw_value_size": 13710225, "raw_average_value_size": 944, "num_data_blocks": 1614, "num_entries": 14520, "num_filter_entries": 14520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 246, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.840831) [db/compaction/compaction_job.cc:1663] [default] [JOB 158] Compacted 1@0 + 1@6 files to L6 => 14040889 bytes
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.842082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.8 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(20.4) write-amplify(10.1) OK, records in: 15299, records dropped: 779 output_compression: NoCompression
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.842115) EVENT_LOG_v1 {"time_micros": 1769096329842099, "job": 158, "event": "compaction_finished", "compaction_time_micros": 101085, "compaction_time_cpu_micros": 35810, "output_level": 6, "num_output_files": 1, "total_output_size": 14040889, "num_input_records": 15299, "num_output_records": 14520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000245.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329844398, "job": 158, "event": "table_file_deletion", "file_number": 245}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000243.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329849041, "job": 158, "event": "table_file_deletion", "file_number": 243}
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.738680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:38:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:50.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:50.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:50 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:50 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:50 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:51.034+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:51 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:51 compute-2 ceph-mon[77081]: pgmap v3955: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:52.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:52.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:52 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:52 compute-2 ceph-mon[77081]: pgmap v3956: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:53.024+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:53.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:53 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:54.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:54.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:38:54 compute-2 podman[286703]: 2026-01-22 15:38:54.988694149 +0000 UTC m=+0.053952014 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:38:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:55.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:55 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:55 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:55 compute-2 ceph-mon[77081]: pgmap v3957: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:55 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:38:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:38:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:55.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:38:55 compute-2 sudo[286724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:55 compute-2 sudo[286724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:55 compute-2 sudo[286724]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:55 compute-2 sudo[286749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:38:55 compute-2 sudo[286749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:38:55 compute-2 sudo[286749]: pam_unix(sudo:session): session closed for user root
Jan 22 15:38:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:55.984+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:56 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:56.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:56.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:38:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:57.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:38:57 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:57 compute-2 ceph-mon[77081]: pgmap v3958: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:57.890+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:58.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:58.844+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:58 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:59 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:59 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:59 compute-2 ceph-mon[77081]: pgmap v3959: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:38:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:38:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:38:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:59.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:38:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:59.887+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:59 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:38:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:38:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:00 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:00 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:00 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:00.923+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:00 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:01.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:01 compute-2 ceph-mon[77081]: pgmap v3960: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:01.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:01 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:02 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:02 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:02.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:02.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:02 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:03.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:03 compute-2 ceph-mon[77081]: pgmap v3961: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:03 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:03.847+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:03 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:04 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:04.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:04.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:04 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:05.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:05 compute-2 ceph-mon[77081]: pgmap v3962: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:05 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:05 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:05.831+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:05 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:06 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:06 compute-2 ceph-mon[77081]: pgmap v3963: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:06.803+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:06 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:06.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:07.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:07.765+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:07 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:07 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:08.724+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:08.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:09 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:09 compute-2 ceph-mon[77081]: pgmap v3964: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
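[annotation] The pgmap line is the cluster-wide summary: 305 placement groups, 2 of them active+clean+laggy (still serving I/O but responding slowly, consistent with the slow-ops warnings) and the rest healthy. The capacity figures rule out a space problem, as a trivial check on the numbers above shows (binary units assumed):

```python
# Capacity figures from the pgmap line: 652 MiB used of 21 GiB raw.
used_mib, total_gib = 652, 21
print(f"{used_mib / (total_gib * 1024):.1%} of raw capacity used")  # 3.0%
```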
Jan 22 15:39:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:09.742+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:39:09.916 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 15:39:09 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:39:09.917 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 15:39:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:10 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:10 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7338 sec, osd.2 has slow ops (SLOW_OPS)
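[annotation] The SLOW_OPS health updates are the key datapoint: the blocked-for counter grows by exactly the elapsed wall time (7338 s here, then 7343, 7348, 7353, 7358 s in the updates below), so the oldest op has made zero progress for roughly two hours. A trivial conversion, assuming nothing beyond the figure in the line above:

```python
# "oldest one blocked for 7338 sec" converted to h/m/s.
h, rem = divmod(7338, 3600)
m, s = divmod(rem, 60)
print(f"{h}h {m}m {s}s")  # 2h 2m 18s
```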
Jan 22 15:39:10 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:10.762+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:10 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:11 compute-2 ceph-mon[77081]: pgmap v3965: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:11 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:11.801+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:12 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:12.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:12.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:13 compute-2 ceph-mon[77081]: pgmap v3966: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:13 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:13.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:13.761+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:14 compute-2 podman[286783]: 2026-01-22 15:39:14.05845642 +0000 UTC m=+0.118858697 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Jan 22 15:39:14 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:14.752+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:14.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:15 compute-2 ceph-mon[77081]: pgmap v3967: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:15 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:15 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:15.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:15.704+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:15 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:15 compute-2 sudo[286811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:15 compute-2 sudo[286811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:15 compute-2 sudo[286811]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:15 compute-2 sudo[286836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:15 compute-2 sudo[286836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:15 compute-2 sudo[286836]: pam_unix(sudo:session): session closed for user root
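[annotation] The paired /bin/true sudo sessions from ceph-admin are consistent with cephadm's SSH orchestrator probing that passwordless sudo still works before it runs real work; the COMMAND= field is the audit trail. A sketch that tallies those commands from journal text (input lines copied from this log; the function name is made up):

```python
import re
from collections import Counter

CMD = re.compile(r"ceph-admin : .*COMMAND=(?P<cmd>.+)$")

def tally_sudo(lines):
    # Count the executable of each ceph-admin sudo invocation.
    return Counter(m["cmd"].split()[0] for l in lines if (m := CMD.search(l)))

lines = [
    "ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true",
    "ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3",
]
print(tally_sudo(lines))  # Counter({'/bin/true': 1, '/bin/which': 1})
```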
Jan 22 15:39:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:16.746+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:16.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:17 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:17.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:17.779+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:18 compute-2 ceph-mon[77081]: pgmap v3968: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:18 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:18 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:18.805+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:18 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:18.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:18 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:39:18.919 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
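[annotation] This DbSetCommand is the write the agent promised at 15:39:09.917, when it logged "Delaying updating chassis table for 9 seconds"; the neutron:ovn-metadata-sb-cfg value 66 matches the nb_cfg=66 it saw in the SB_Global update. The timing checks out, per this throwaway sketch over the two timestamps above:

```python
from datetime import datetime

t0 = datetime.fromisoformat("2026-01-22 15:39:09.917")  # "Delaying ... 9 seconds"
t1 = datetime.fromisoformat("2026-01-22 15:39:18.919")  # DbSetCommand transaction
print((t1 - t0).total_seconds())  # 9.002
```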
Jan 22 15:39:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:19 compute-2 ceph-mon[77081]: pgmap v3969: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4252721326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:39:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/4252721326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:39:19 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:19.840+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:20.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:20.848+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:21 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:21 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:21 compute-2 ceph-mon[77081]: pgmap v3970: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:21.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:21 compute-2 sudo[286864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:21 compute-2 sudo[286864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:21 compute-2 sudo[286864]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:21 compute-2 sudo[286889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:39:21 compute-2 sudo[286889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:21 compute-2 sudo[286889]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:21 compute-2 sudo[286914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:21 compute-2 sudo[286914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:21 compute-2 sudo[286914]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:21 compute-2 sudo[286939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:39:21 compute-2 sudo[286939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:21.812+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:22 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:22 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:22 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:22 compute-2 sudo[286939]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:22 compute-2 sudo[286995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:22 compute-2 sudo[286995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:22 compute-2 sudo[286995]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:22 compute-2 sudo[287020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:39:22 compute-2 sudo[287020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:22 compute-2 sudo[287020]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:22 compute-2 sudo[287045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:22 compute-2 sudo[287045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:22 compute-2 sudo[287045]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:22 compute-2 sudo[287070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 088fe176-0106-5401-803c-2da38b73b76a -- inventory --format=json-pretty --filter-for-batch
Jan 22 15:39:22 compute-2 sudo[287070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
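[annotation] The orchestrator runs a checksum-named copy of the cephadm binary it pushed under /var/lib/ceph/<fsid>/; here it asks ceph-volume for a device inventory, filtered for batch OSD creation. Roughly the same call can be reproduced by hand, as in this sketch (assumes a cephadm binary on PATH; subcommand and flags taken from the COMMAND= line above):

```python
import json
import subprocess

# Re-run the inventory the orchestrator just requested on this host.
out = subprocess.run(
    ["cephadm", "ceph-volume", "--", "inventory", "--format=json-pretty"],
    check=True, capture_output=True, text=True,
).stdout
print(len(json.loads(out)), "devices found")
```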
Jan 22 15:39:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:22.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:22 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:22.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:23 compute-2 podman[287136]: 2026-01-22 15:39:23.049440985 +0000 UTC m=+0.097704239 container create 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 22 15:39:23 compute-2 podman[287136]: 2026-01-22 15:39:22.982473657 +0000 UTC m=+0.030736941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:39:23 compute-2 systemd[1]: Started libpod-conmon-7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd.scope.
Jan 22 15:39:23 compute-2 systemd[1]: Started libcrun container.
Jan 22 15:39:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:23 compute-2 podman[287136]: 2026-01-22 15:39:23.319438829 +0000 UTC m=+0.367702083 container init 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 15:39:23 compute-2 podman[287136]: 2026-01-22 15:39:23.326487802 +0000 UTC m=+0.374751086 container start 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 15:39:23 compute-2 podman[287136]: 2026-01-22 15:39:23.331010975 +0000 UTC m=+0.379274259 container attach 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 15:39:23 compute-2 blissful_brattain[287153]: 167 167
Jan 22 15:39:23 compute-2 systemd[1]: libpod-7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd.scope: Deactivated successfully.
Jan 22 15:39:23 compute-2 podman[287136]: 2026-01-22 15:39:23.343188008 +0000 UTC m=+0.391451302 container died 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 15:39:23 compute-2 systemd[1]: var-lib-containers-storage-overlay-d6e4c25ea71f036599990632dc70bab84b231a14431ff4efd4e70ae2eb0e70f5-merged.mount: Deactivated successfully.
Jan 22 15:39:23 compute-2 podman[287136]: 2026-01-22 15:39:23.391563719 +0000 UTC m=+0.439826963 container remove 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 15:39:23 compute-2 systemd[1]: libpod-conmon-7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd.scope: Deactivated successfully.
Jan 22 15:39:23 compute-2 ceph-mon[77081]: pgmap v3971: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:23 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:23 compute-2 podman[287178]: 2026-01-22 15:39:23.581434444 +0000 UTC m=+0.065365176 container create 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 15:39:23 compute-2 systemd[1]: Started libpod-conmon-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope.
Jan 22 15:39:23 compute-2 podman[287178]: 2026-01-22 15:39:23.549490352 +0000 UTC m=+0.033421094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 15:39:23 compute-2 systemd[1]: Started libcrun container.
Jan 22 15:39:23 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 15:39:23 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 15:39:23 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 15:39:23 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 15:39:23 compute-2 podman[287178]: 2026-01-22 15:39:23.683665606 +0000 UTC m=+0.167596338 container init 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 15:39:23 compute-2 podman[287178]: 2026-01-22 15:39:23.690574465 +0000 UTC m=+0.174505157 container start 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 15:39:23 compute-2 podman[287178]: 2026-01-22 15:39:23.693641089 +0000 UTC m=+0.177571861 container attach 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 15:39:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:23.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:24.826+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:24.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]: [
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:     {
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "available": false,
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "ceph_device": false,
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "lsm_data": {},
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "lvs": [],
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "path": "/dev/sr0",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "rejected_reasons": [
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "Insufficient space (<5GB)",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "Has a FileSystem"
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         ],
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         "sys_api": {
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "actuators": null,
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "device_nodes": "sr0",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "devname": "sr0",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "human_readable_size": "482.00 KB",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "id_bus": "ata",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "model": "QEMU DVD-ROM",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "nr_requests": "2",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "parent": "/dev/sr0",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "partitions": {},
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "path": "/dev/sr0",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "removable": "1",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "rev": "2.5+",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "ro": "0",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "rotational": "1",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "sas_address": "",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "sas_device_handle": "",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "scheduler_mode": "mq-deadline",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "sectors": 0,
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "sectorsize": "2048",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "size": 493568.0,
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "support_discard": "2048",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "type": "disk",
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:             "vendor": "QEMU"
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:         }
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]:     }
Jan 22 15:39:24 compute-2 nostalgic_newton[287195]: ]
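[annotation] The inventory comes back as a JSON array, one object per device, with rejected_reasons explaining exactly why a device is skipped. A small filter over that array, reading from a hypothetical inventory.json saved from the output above:

```python
import json

with open("inventory.json") as f:  # the array printed by ceph-volume above
    inventory = json.load(f)

for dev in inventory:
    if dev["available"]:
        print("usable: ", dev["path"])
    else:
        print("rejected:", dev["path"], "->", "; ".join(dev["rejected_reasons"]))
# On this host: only /dev/sr0, rejected for "Insufficient space (<5GB)" and
# "Has a FileSystem" -- no new OSD devices here.
```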
Jan 22 15:39:25 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:25 compute-2 ceph-mon[77081]: pgmap v3972: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:25 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:25 compute-2 systemd[1]: libpod-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope: Deactivated successfully.
Jan 22 15:39:25 compute-2 systemd[1]: libpod-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope: Consumed 1.361s CPU time.
Jan 22 15:39:25 compute-2 podman[287178]: 2026-01-22 15:39:25.028786771 +0000 UTC m=+1.512717503 container died 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 15:39:25 compute-2 systemd[1]: var-lib-containers-storage-overlay-c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a-merged.mount: Deactivated successfully.
Jan 22 15:39:25 compute-2 podman[287178]: 2026-01-22 15:39:25.088654376 +0000 UTC m=+1.572585068 container remove 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 15:39:25 compute-2 systemd[1]: libpod-conmon-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope: Deactivated successfully.
Jan 22 15:39:25 compute-2 sudo[287070]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:25 compute-2 podman[288478]: 2026-01-22 15:39:25.143242407 +0000 UTC m=+0.077555029 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:39:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:25.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
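[annotation] At 15:39:25 the backlog finally moves: the OSD now reports 79 slow ops instead of 195, and the 'vms' share drops from 111 to 46, yet the oldest op is still the same rbd_mirror_snapshot_schedule omap read. In other words (trivial arithmetic on the two counts above):

```python
before, after = 195, 79
print(f"{before - after} queued requests drained; {after} still slow, "
      "head-of-line op unchanged")  # 116 drained
```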
Jan 22 15:39:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:26.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:26 compute-2 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:39:26 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:26 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:26.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:26 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:27.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:27.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:28 compute-2 ceph-mon[77081]: pgmap v3973: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:39:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:39:28 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:39:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:39:28 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:39:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:28.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:28.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:29 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:29 compute-2 ceph-mon[77081]: pgmap v3974: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:29.934+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:30 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:30 compute-2 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7358 sec, osd.2 has slow ops (SLOW_OPS)
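
The "oldest one blocked for N sec" figure climbs by five with every health update in this capture (7358, 7363, 7368, and so on below), so this is one op stuck continuously rather than a rolling window of new ones. Subtracting the blocked time from the message timestamp dates the onset to roughly 13:36:52, which helps when correlating with earlier log activity; the arithmetic:

    from datetime import datetime, timedelta

    # Values taken from the health update above.
    reported_at = datetime(2026, 1, 22, 15, 39, 30)
    blocked_for = timedelta(seconds=7358)

    # Approximate moment the oldest op became stuck: 2026-01-22 13:36:52.
    print(reported_at - blocked_for)
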
Jan 22 15:39:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:30.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:30.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:31 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:31 compute-2 ceph-mon[77081]: pgmap v3975: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:31.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:32 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:32 compute-2 ceph-mon[77081]: pgmap v3976: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:32.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:32.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:32 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:33 compute-2 sudo[288511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:33 compute-2 sudo[288511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:33 compute-2 sudo[288511]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:33 compute-2 sudo[288536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:39:33 compute-2 sudo[288536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:33 compute-2 sudo[288536]: pam_unix(sudo:session): session closed for user root
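
These short-lived ceph-admin sudo sessions running /bin/true and ls /etc/sysctl.d have the shape of non-interactive orchestrator probes (connectivity and sysctl drop-in checks from cephadm or an Ansible-style tool, presumably) rather than a human shell. When auditing what such an account actually executed, the sudo audit lines reduce to tuples; a sketch matching the exact field layout above:

    import re

    # Matches: sudo[PID]: USER : PWD=... ; USER=... ; COMMAND=...
    SUDO_CMD = re.compile(
        r"sudo\[\d+\]: (?P<who>\S+) : PWD=(?P<pwd>\S+) ; "
        r"USER=(?P<as_user>\S+) ; COMMAND=(?P<cmd>.+)$"
    )

    def sudo_commands(lines):
        """Yield (invoking_user, cwd, target_user, command) per audit line."""
        for line in lines:
            m = SUDO_CMD.search(line)
            if m:
                yield m.group("who", "pwd", "as_user", "cmd")
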
Jan 22 15:39:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:33.986+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:34 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:34 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:39:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:34.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:34.960+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:35 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:35 compute-2 ceph-mon[77081]: pgmap v3977: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:35 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:35.924+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:35 compute-2 sudo[288562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:35 compute-2 sudo[288562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:35 compute-2 sudo[288562]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:36 compute-2 sudo[288587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:36 compute-2 sudo[288587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:36 compute-2 sudo[288587]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:36.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:36.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:37 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:37 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:37 compute-2 ceph-mon[77081]: pgmap v3978: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:37.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:37.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:38.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:39.002+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:39.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:39 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:39 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:39 compute-2 ceph-mon[77081]: pgmap v3979: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:39.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:40 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:40 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:40.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:40.997+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:41.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:41 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:41 compute-2 ceph-mon[77081]: pgmap v3980: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:41.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:42 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:42.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:42.932+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:43.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:43 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:43 compute-2 ceph-mon[77081]: pgmap v3981: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:43.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:44 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:44.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:44.944+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:45 compute-2 podman[288616]: 2026-01-22 15:39:45.054109032 +0000 UTC m=+0.103554870 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
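
This podman event is the periodic health check for ovn_controller: health_status=healthy with health_failing_streak=0 means the configured test (/openstack/healthcheck, bind-mounted per the healthcheck stanza) passed again. The embedded config_data payload is a Python-literal dict rather than JSON (single quotes, bare True), so once the braced substring is copied out it parses with ast.literal_eval; a sketch with the payload abbreviated to a few keys:

    import ast

    # Abbreviated copy of the config_data={...} payload from the event above;
    # the real dict is much longer.
    config_text = (
        "{'depends_on': ['openvswitch.service'], 'net': 'host', "
        "'privileged': True, 'restart': 'always', 'user': 'root'}"
    )

    config = ast.literal_eval(config_text)
    print(config["net"], config["restart"])
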
Jan 22 15:39:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:45.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:45 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:45 compute-2 ceph-mon[77081]: pgmap v3982: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:45 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:45.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:46 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:46 compute-2 ceph-mon[77081]: pgmap v3983: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:46.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:46.966+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:39:47.279 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:39:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:39:47.279 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:39:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:39:47.280 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
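
The acquiring/acquired/released trio from ovn_metadata_agent is oslo.concurrency's standard debug trace: lockutils wraps the call, logs the wait time (0.000s) and hold time (0.000s), and points at its own inner wrapper (lockutils.py:404/409/423). The pattern that emits these lines looks roughly like the following; a sketch of the idiom, not neutron's actual source:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        # A name-keyed lock; the decorator logs the acquire/acquired/released
        # trio seen above at DEBUG level each time the method runs.
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            ...  # runs with the lock held
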
Jan 22 15:39:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:47.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:47 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:47.948+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:48 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:48 compute-2 ceph-mon[77081]: pgmap v3984: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:48.933+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:49.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:49 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:49.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:50 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:50 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:50 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:50 compute-2 ceph-mon[77081]: pgmap v3985: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:50.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:51.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:51 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:51.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:52 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:52 compute-2 ceph-mon[77081]: pgmap v3986: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:39:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:39:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:52.903+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:53.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:53 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:53.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:54 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:54 compute-2 ceph-mon[77081]: pgmap v3987: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:39:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:39:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:54.884+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:39:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:55.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:55 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:39:55 compute-2 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:39:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:55.880+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:55 compute-2 podman[288647]: 2026-01-22 15:39:55.996055899 +0000 UTC m=+0.053387379 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 15:39:56 compute-2 sudo[288667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:56 compute-2 sudo[288667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:56 compute-2 sudo[288667]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:56 compute-2 sudo[288692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:39:56 compute-2 sudo[288692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:39:56 compute-2 sudo[288692]: pam_unix(sudo:session): session closed for user root
Jan 22 15:39:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:56.860+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:56.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:56 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:56 compute-2 ceph-mon[77081]: pgmap v3988: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:57.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:57.857+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:57 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:57 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:58 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:58.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:58.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:59 compute-2 ceph-mon[77081]: pgmap v3989: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:39:59 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:39:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:39:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:59.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:39:59 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:39:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:59.814+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:39:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:00 compute-2 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:00 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:00 compute-2 ceph-mon[77081]: Health detail: HEALTH_WARN 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 15:40:00 compute-2 ceph-mon[77081]: [WRN] SLOW_OPS: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
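
With SLOW_OPS pinned on osd.2 for over two hours and the oldest op a read of rbd_mirror_snapshot_schedule, the usual next step is to ask the OSD itself what is in flight via its admin socket. "ceph daemon osd.2 dump_ops_in_flight" must run where that socket is reachable (inside the OSD's container under cephadm, for example via "cephadm shell"); its JSON output can then be inspected directly. A sketch:

    import json
    import subprocess

    # Query osd.2's admin socket; under cephadm, run this inside the OSD
    # container (for example via "cephadm shell").
    out = subprocess.run(
        ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
        check=True, capture_output=True, text=True,
    ).stdout

    for op in json.loads(out)["ops"]:
        print(op["age"], op["description"])
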
Jan 22 15:40:00 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:00.823+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:00.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:01.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:01.867+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:01 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:02 compute-2 ceph-mon[77081]: pgmap v3990: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:02 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:02.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:02.897+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:02 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:03 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:03 compute-2 ceph-mon[77081]: pgmap v3991: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:03 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:03.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:03 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:04 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #247. Immutable memtables: 0.
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.827852) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 159] Flushing memtable with next log file: 247
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404827927, "job": 159, "event": "flush_started", "num_memtables": 1, "num_entries": 1349, "num_deletes": 384, "total_data_size": 2279929, "memory_usage": 2306168, "flush_reason": "Manual Compaction"}
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 159] Level-0 flush table #248: started
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404839561, "cf_name": "default", "job": 159, "event": "table_file_creation", "file_number": 248, "file_size": 989946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 123146, "largest_seqno": 124490, "table_properties": {"data_size": 985175, "index_size": 1845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 16963, "raw_average_key_size": 23, "raw_value_size": 973300, "raw_average_value_size": 1333, "num_data_blocks": 77, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 384, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096330, "oldest_key_time": 1769096330, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 248, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 159] Flush lasted 11760 microseconds, and 6515 cpu microseconds.
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839620) [db/flush_job.cc:967] [default] [JOB 159] Level-0 flush table #248: 989946 bytes OK
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839645) [db/memtable_list.cc:519] [default] Level-0 commit table #248 started
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.841971) [db/memtable_list.cc:722] [default] Level-0 commit table #248: memtable #1 done
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842012) EVENT_LOG_v1 {"time_micros": 1769096404842003, "job": 159, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 159] Try to delete WAL files size 2272850, prev total WAL file size 2272850, number of live WAL files 2.
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000244.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353039' seq:72057594037927935, type:22 .. '6D6772737461740033373632' seq:0, type:0; will stop at (end)
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 160] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 159 Base level 0, inputs: [248(966KB)], [246(13MB)]
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404843027, "job": 160, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [248], "files_L6": [246], "score": -1, "input_data_size": 15030835, "oldest_snapshot_seqno": -1}
Jan 22 15:40:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:04.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:04 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:04.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 160] Generated table #249: 14499 keys, 11546892 bytes, temperature: kUnknown
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404950079, "cf_name": "default", "job": 160, "event": "table_file_creation", "file_number": 249, "file_size": 11546892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11468961, "index_size": 40570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36293, "raw_key_size": 398246, "raw_average_key_size": 27, "raw_value_size": 11223898, "raw_average_value_size": 774, "num_data_blocks": 1460, "num_entries": 14499, "num_filter_entries": 14499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 249, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.950489) [db/compaction/compaction_job.cc:1663] [default] [JOB 160] Compacted 1@0 + 1@6 files to L6 => 11546892 bytes
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.952428) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.3 rd, 107.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(26.8) write-amplify(11.7) OK, records in: 15250, records dropped: 751 output_compression: NoCompression
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.952449) EVENT_LOG_v1 {"time_micros": 1769096404952439, "job": 160, "event": "compaction_finished", "compaction_time_micros": 107163, "compaction_time_cpu_micros": 43117, "output_level": 6, "num_output_files": 1, "total_output_size": 11546892, "num_input_records": 15250, "num_output_records": 14499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000248.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404952930, "job": 160, "event": "table_file_deletion", "file_number": 248}
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000246.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404955987, "job": 160, "event": "table_file_deletion", "file_number": 246}
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:04 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:05.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:05 compute-2 ceph-mon[77081]: pgmap v3992: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:05 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 7393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:05 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:05.877+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:05 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:06 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:06 compute-2 ceph-mon[77081]: pgmap v3993: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:06.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:06.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:06 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:40:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:40:07 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:07.959+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:07 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:08 compute-2 ceph-mon[77081]: pgmap v3994: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:08.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:08.963+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:09 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:09 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:09.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:10 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:10 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 7398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:10 compute-2 ceph-mon[77081]: pgmap v3995: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:40:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:40:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:10.994+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:10 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:11 compute-2 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:40:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:11.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:12 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:12 compute-2 ceph-mon[77081]: pgmap v3996: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:12.912+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:13.935+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:14 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:14 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:14 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:14 compute-2 ceph-mon[77081]: pgmap v3997: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:14.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:14 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:14.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:15 compute-2 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 7403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:15.957+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:15 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:16 compute-2 podman[288727]: 2026-01-22 15:40:16.068222726 +0000 UTC m=+0.122751093 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 15:40:16 compute-2 sudo[288753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:16 compute-2 sudo[288753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:16 compute-2 sudo[288753]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:16 compute-2 sudo[288778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:16 compute-2 sudo[288778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:16 compute-2 sudo[288778]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:16 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:16 compute-2 ceph-mon[77081]: pgmap v3998: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:16.998+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:17 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:18.001+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:18 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:18 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:18 compute-2 ceph-mon[77081]: pgmap v3999: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/18665897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:40:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/18665897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:40:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:18.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:19.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:40:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:40:19 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:19 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:19 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:19 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:19.996+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:20 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:20 compute-2 ceph-mon[77081]: pgmap v4000: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:20.987+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:21.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:21 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:22.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:22.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:22 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:22 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:22 compute-2 ceph-mon[77081]: pgmap v4001: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:40:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:40:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:23.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:24 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:24 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:24.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:24.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:24 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:25 compute-2 ceph-mon[77081]: pgmap v4002: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:25 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:25 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:25.892+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:26 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:26 compute-2 ceph-mon[77081]: pgmap v4003: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:26 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:26 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:26 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:26.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:26 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:26 compute-2 podman[288808]: 2026-01-22 15:40:26.997355346 +0000 UTC m=+0.061008668 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 15:40:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:27 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:27.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:28.882+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:28 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:28 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:28 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:28 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:28 compute-2 ceph-mon[77081]: pgmap v4004: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:29 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:29 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:29.929+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:29 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:30 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:30 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:30 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:30 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:30 compute-2 ceph-mon[77081]: pgmap v4005: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:30.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:31 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:31.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #250. Immutable memtables: 0.
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.968991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 161] Flushing memtable with next log file: 250
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431969028, "job": 161, "event": "flush_started", "num_memtables": 1, "num_entries": 638, "num_deletes": 298, "total_data_size": 728685, "memory_usage": 741224, "flush_reason": "Manual Compaction"}
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 161] Level-0 flush table #251: started
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431973723, "cf_name": "default", "job": 161, "event": "table_file_creation", "file_number": 251, "file_size": 476871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 124495, "largest_seqno": 125128, "table_properties": {"data_size": 473883, "index_size": 831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9239, "raw_average_key_size": 21, "raw_value_size": 467164, "raw_average_value_size": 1076, "num_data_blocks": 36, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 298, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096405, "oldest_key_time": 1769096405, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 251, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 161] Flush lasted 4757 microseconds, and 1762 cpu microseconds.
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.973751) [db/flush_job.cc:967] [default] [JOB 161] Level-0 flush table #251: 476871 bytes OK
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.973764) [db/memtable_list.cc:519] [default] Level-0 commit table #251 started
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975236) [db/memtable_list.cc:722] [default] Level-0 commit table #251: memtable #1 done
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975248) EVENT_LOG_v1 {"time_micros": 1769096431975245, "job": 161, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 161] Try to delete WAL files size 724905, prev total WAL file size 724905, number of live WAL files 2.
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000247.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975686) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 162] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 161 Base level 0, inputs: [251(465KB)], [249(11MB)]
Jan 22 15:40:31 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431975752, "job": 162, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [251], "files_L6": [249], "score": -1, "input_data_size": 12023763, "oldest_snapshot_seqno": -1}
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 162] Generated table #252: 14328 keys, 10215474 bytes, temperature: kUnknown
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432082002, "cf_name": "default", "job": 162, "event": "table_file_creation", "file_number": 252, "file_size": 10215474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10139890, "index_size": 38671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 394979, "raw_average_key_size": 27, "raw_value_size": 9899036, "raw_average_value_size": 690, "num_data_blocks": 1380, "num_entries": 14328, "num_filter_entries": 14328, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 252, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.082436) [db/compaction/compaction_job.cc:1663] [default] [JOB 162] Compacted 1@0 + 1@6 files to L6 => 10215474 bytes
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.085066) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.1 rd, 96.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(46.6) write-amplify(21.4) OK, records in: 14933, records dropped: 605 output_compression: NoCompression
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.085097) EVENT_LOG_v1 {"time_micros": 1769096432085084, "job": 162, "event": "compaction_finished", "compaction_time_micros": 106337, "compaction_time_cpu_micros": 52621, "output_level": 6, "num_output_files": 1, "total_output_size": 10215474, "num_input_records": 14933, "num_output_records": 14328, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000251.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432085438, "job": 162, "event": "table_file_deletion", "file_number": 251}
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000249.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432089433, "job": 162, "event": "table_file_deletion", "file_number": 249}
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:40:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:32.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:32 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:32 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:32 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:40:32 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:32.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:40:32 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:32 compute-2 ceph-mon[77081]: pgmap v4006: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:32 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:33 compute-2 sudo[288830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:33 compute-2 sudo[288830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-2 sudo[288830]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:33 compute-2 sudo[288855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:40:33 compute-2 sudo[288855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-2 sudo[288855]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:33 compute-2 sudo[288880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:33 compute-2 sudo[288880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-2 sudo[288880]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:33 compute-2 sudo[288905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:40:33 compute-2 sudo[288905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:33.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:34 compute-2 sudo[288905]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:34 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:34 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:34 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:34.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:34 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:34 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:34 compute-2 ceph-mon[77081]: pgmap v4007: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:40:34 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:34 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:40:34 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:40:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:35.926+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:36 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:36 compute-2 sudo[288962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:36 compute-2 sudo[288962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:36 compute-2 sudo[288962]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:36 compute-2 sudo[288987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:36 compute-2 sudo[288987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:36 compute-2 sudo[288987]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:36.911+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:36 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:36 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:36 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:37 compute-2 ceph-mon[77081]: pgmap v4008: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:37 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:37.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:38 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:38.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:38 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:38 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:38 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:38 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:38.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:39.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:39 compute-2 ceph-mon[77081]: pgmap v4009: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:39 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:39.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:40 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:40 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:40.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:40 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:40 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:40 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:40.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:41.888+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:42 compute-2 ceph-mon[77081]: pgmap v4010: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:42 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:42.870+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:42 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:42 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:40:42 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:40:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:43 compute-2 sudo[289016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:43 compute-2 sudo[289016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:43 compute-2 sudo[289016]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:43 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:43 compute-2 ceph-mon[77081]: pgmap v4011: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:43 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:43 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:40:43 compute-2 sudo[289041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:40:43 compute-2 sudo[289041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:43 compute-2 sudo[289041]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:43.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:44.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:44 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:44 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:44 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:44 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:44 compute-2 ceph-mon[77081]: pgmap v4012: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:45.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:45.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:45 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:45 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:46.913+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:46 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:46 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:46 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:47 compute-2 podman[289067]: 2026-01-22 15:40:47.090385811 +0000 UTC m=+0.146322527 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:40:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:40:47.280 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:40:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:40:47.281 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:40:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:40:47.282 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:40:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:47.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:47 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:47 compute-2 ceph-mon[77081]: pgmap v4013: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:47 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:47.907+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:48 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:48 compute-2 ceph-mon[77081]: pgmap v4014: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:48.868+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:48 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:48 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:48 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:49.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:49 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:49 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:50.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:50 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:50 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:40:50 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:50.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:40:50 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:50 compute-2 ceph-mon[77081]: pgmap v4015: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:50 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:51.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:52 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:52 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:52 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:52.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:52.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:53 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:53 compute-2 ceph-mon[77081]: pgmap v4016: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:53.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:53.953+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:54 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:54.915+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:54 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:54 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:54 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:54.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:54 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:40:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:55.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:55 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:55 compute-2 ceph-mon[77081]: pgmap v4017: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:55 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:40:55 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:55.918+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:56 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:56 compute-2 sudo[289098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:56 compute-2 sudo[289098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:56 compute-2 sudo[289098]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:56 compute-2 sudo[289123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:40:56 compute-2 sudo[289123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:40:56 compute-2 sudo[289123]: pam_unix(sudo:session): session closed for user root
Jan 22 15:40:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:56.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:56 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:56 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:56 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:56.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:57 compute-2 ceph-mon[77081]: pgmap v4018: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:57 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:57.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:57 compute-2 podman[289149]: 2026-01-22 15:40:57.97564934 +0000 UTC m=+0.040724333 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 15:40:58 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:58 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:58 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:58 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:58.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:58.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:58 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:40:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:40:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:40:59 compute-2 ceph-mon[77081]: pgmap v4019: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:40:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:59.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:59 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:40:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:40:59 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:00 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:00 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:00 compute-2 ceph-mon[77081]: pgmap v4020: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:00 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:00 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:00 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:00.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:00.947+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:00 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:01.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:01 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:02 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:02.914+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:02 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:02 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:02 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:41:02 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:41:03 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:03 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:03 compute-2 ceph-mon[77081]: pgmap v4021: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:03.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:03 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:04 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:04 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:04 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:04 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:04 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:04 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:04.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:04 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:05 compute-2 ceph-mon[77081]: pgmap v4022: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:05 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:05.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:05.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:05 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:06 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:06 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:06 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:06 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:06.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:06.971+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:06 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:07 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:07 compute-2 ceph-mon[77081]: pgmap v4023: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:07.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:07.999+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:08 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:08 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:08 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:08 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:08.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:08.985+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:09.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:09.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:10 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:10 compute-2 ceph-mon[77081]: pgmap v4024: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:10 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:10 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:10 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:10.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:11.039+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:11 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:11 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:11 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:11 compute-2 ceph-mon[77081]: pgmap v4025: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:11.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:11.993+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:12 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:12 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:12 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:12 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:12.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:13.025+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:13 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:13 compute-2 ceph-mon[77081]: pgmap v4026: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:13.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:14.072+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:14 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:14 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:14 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:14 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:14.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:15.119+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:15 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:15.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:15 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:15 compute-2 ceph-mon[77081]: pgmap v4027: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:15 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:16.115+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:16 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:16 compute-2 sudo[289177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:16 compute-2 sudo[289177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:16 compute-2 sudo[289177]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:16 compute-2 sudo[289202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:16 compute-2 sudo[289202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:16 compute-2 sudo[289202]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:16 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:16 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:16 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:16.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:17.147+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:17.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:17 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:17 compute-2 ceph-mon[77081]: pgmap v4028: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:18 compute-2 podman[289228]: 2026-01-22 15:41:18.05962804 +0000 UTC m=+0.095834618 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:41:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:18 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:18 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:18 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:18 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:18 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:19.131+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:41:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:41:19 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:19 compute-2 ceph-mon[77081]: pgmap v4029: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1774652085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:41:19 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1774652085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:41:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:20.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:20 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:20 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:20 compute-2 ceph-mon[77081]: pgmap v4030: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:20 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:20 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:41:20 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:20.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:41:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:21.077+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:21.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:21 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:22.065+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:22 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:22 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:22 compute-2 ceph-mon[77081]: pgmap v4031: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:22 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:22 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:22 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:23.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:23.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:23 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:24.073+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:24 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:24 compute-2 ceph-mon[77081]: pgmap v4032: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:24 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:24 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:24 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:24.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:25.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:41:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:41:25 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:25 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:26.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:26 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:27.061+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:27.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:27 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:27 compute-2 ceph-mon[77081]: pgmap v4033: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:28.016+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:28 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:28 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:28 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:29 compute-2 podman[289259]: 2026-01-22 15:41:29.048629394 +0000 UTC m=+0.103666882 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:41:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:29.057+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:29 compute-2 ceph-mon[77081]: pgmap v4034: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:29 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:30.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:30 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:30 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:30 compute-2 ceph-mon[77081]: pgmap v4035: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:31.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:31.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:31 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:31.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:32 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:32 compute-2 ceph-mon[77081]: pgmap v4036: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:33.015+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:33.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:34 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:34.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:35.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:35.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:35 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:35 compute-2 ceph-mon[77081]: pgmap v4037: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:35 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:36.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:36 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:36 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:36 compute-2 sudo[289282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:36 compute-2 sudo[289282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:36 compute-2 sudo[289282]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:37 compute-2 sudo[289308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:37 compute-2 sudo[289308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:37 compute-2 sudo[289308]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:37.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:41:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:37.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:37 compute-2 ceph-mon[77081]: pgmap v4038: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:37 compute-2 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:41:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 15:41:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:37.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 15:41:38 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:38.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:38 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:39.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:39.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:39 compute-2 ceph-mon[77081]: pgmap v4039: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:39 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:40.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:40 compute-2 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:40 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:40 compute-2 ceph-mon[77081]: pgmap v4040: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:41.056+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:41.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:41 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:42.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:42 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:42 compute-2 ceph-mon[77081]: pgmap v4041: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:43.017+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:43 compute-2 sudo[289336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:43 compute-2 sudo[289336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:43 compute-2 sudo[289336]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:43 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:43 compute-2 sudo[289361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:41:43 compute-2 sudo[289361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:43 compute-2 sudo[289361]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-2 sudo[289386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:44 compute-2 sudo[289386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:44 compute-2 sudo[289386]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:44.063+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:44 compute-2 sudo[289411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 22 15:41:44 compute-2 sudo[289411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:44 compute-2 sudo[289411]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-2 sudo[289456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:44 compute-2 sudo[289456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:44 compute-2 sudo[289456]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-2 sudo[289481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:41:44 compute-2 sudo[289481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:44 compute-2 sudo[289481]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-2 sudo[289506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:44 compute-2 sudo[289506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:44 compute-2 sudo[289506]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:44 compute-2 sudo[289531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 22 15:41:44 compute-2 sudo[289531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:44 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:44 compute-2 ceph-mon[77081]: pgmap v4042: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 15:41:44 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 15:41:44 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:45 compute-2 podman[289626]: 2026-01-22 15:41:45.066689132 +0000 UTC m=+0.058301913 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 15:41:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:45.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:45 compute-2 podman[289626]: 2026-01-22 15:41:45.154718116 +0000 UTC m=+0.146330877 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 15:41:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:45 compute-2 podman[289784]: 2026-01-22 15:41:45.85641802 +0000 UTC m=+0.057016948 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:41:45 compute-2 podman[289784]: 2026-01-22 15:41:45.863185494 +0000 UTC m=+0.063784412 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 15:41:46 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:46 compute-2 podman[289850]: 2026-01-22 15:41:46.064973725 +0000 UTC m=+0.048351361 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 22 15:41:46 compute-2 podman[289850]: 2026-01-22 15:41:46.076569222 +0000 UTC m=+0.059946838 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 15:41:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:46.102+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:46 compute-2 sudo[289531]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:46 compute-2 sudo[289884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:46 compute-2 sudo[289884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:46 compute-2 sudo[289884]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:46 compute-2 sudo[289909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:41:46 compute-2 sudo[289909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:46 compute-2 sudo[289909]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:46 compute-2 sudo[289934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:46 compute-2 sudo[289934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:46 compute-2 sudo[289934]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:46 compute-2 sudo[289959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:41:46 compute-2 sudo[289959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:47 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:47 compute-2 ceph-mon[77081]: pgmap v4043: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:47 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:47.101+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:47 compute-2 sudo[289959]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:47.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:41:47.282 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:41:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:41:47.283 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:41:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:41:47.283 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:41:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:48.076+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:48 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:41:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:41:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:41:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:41:48 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:41:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:49.104+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:49 compute-2 podman[290016]: 2026-01-22 15:41:49.124140206 +0000 UTC m=+0.162410681 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:41:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:49 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:49 compute-2 ceph-mon[77081]: pgmap v4044: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:49 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:49.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:50.093+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:50 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:50 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:51.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:51 compute-2 ceph-mon[77081]: pgmap v4045: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:51 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:51.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:52.098+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:52 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:53.071+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:53.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:53 compute-2 ceph-mon[77081]: pgmap v4046: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:53 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:53.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:53 compute-2 sudo[290047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:53 compute-2 sudo[290047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:53 compute-2 sudo[290047]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:53 compute-2 sudo[290072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:41:53 compute-2 sudo[290072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:53 compute-2 sudo[290072]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:54.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:54 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:41:54 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:55.042+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:55.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:41:55 compute-2 ceph-mon[77081]: pgmap v4047: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:55 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:41:55 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:56.075+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:56 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:57.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:57 compute-2 sudo[290099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:57 compute-2 sudo[290099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:57 compute-2 sudo[290099]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:57 compute-2 sudo[290124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:41:57 compute-2 sudo[290124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:41:57 compute-2 sudo[290124]: pam_unix(sudo:session): session closed for user root
Jan 22 15:41:57 compute-2 ceph-mon[77081]: pgmap v4048: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:57 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:41:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:41:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:58.125+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:58 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:59.126+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:59 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:41:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:41:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:59.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:41:59 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:41:59 compute-2 ceph-mon[77081]: pgmap v4049: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:41:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:41:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:41:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:00 compute-2 podman[290150]: 2026-01-22 15:42:00.019704422 +0000 UTC m=+0.073151973 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 15:42:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:00.141+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:00 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:00 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:00 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:01.100+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:01 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:01.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:01 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:01 compute-2 ceph-mon[77081]: pgmap v4050: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:01 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:02.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:02 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:02 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:02 compute-2 ceph-mon[77081]: pgmap v4051: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:03.134+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:03 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:04.171+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:04 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:04 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:05.150+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:05 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:05 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:05 compute-2 ceph-mon[77081]: pgmap v4052: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:05 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7512 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:05.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:06.108+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:06 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:06 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:06 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:07.099+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:07 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:07.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:07 compute-2 ceph-mon[77081]: pgmap v4053: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:07 compute-2 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:42:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:07.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:08.145+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:09.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:09.178+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:09 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:09 compute-2 ceph-mon[77081]: pgmap v4054: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:09.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:10.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:10 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:10 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:10 compute-2 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:11.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:11.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:11.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:11 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:11 compute-2 ceph-mon[77081]: pgmap v4055: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:12.157+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:13.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:13.191+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:13 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:13 compute-2 ceph-mon[77081]: pgmap v4056: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:13.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:14.160+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:14 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:14 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:15.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:15 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:15.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:15 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:15 compute-2 ceph-mon[77081]: pgmap v4057: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:15 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 7522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:15 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:15.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:16.107+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:16 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:17.149+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:17.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:17 compute-2 sudo[290179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:17 compute-2 sudo[290179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:17 compute-2 sudo[290179]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:17 compute-2 ceph-mon[77081]: pgmap v4058: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:17 compute-2 sudo[290204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:17 compute-2 sudo[290204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:17 compute-2 sudo[290204]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:18 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:42:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:18 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:18 compute-2 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:42:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:42:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:42:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:42:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:42:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:19.127+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:19.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:19.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:20 compute-2 podman[290230]: 2026-01-22 15:42:20.020247503 +0000 UTC m=+0.083684121 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 22 15:42:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:20.128+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:20 compute-2 ceph-mon[77081]: pgmap v4059: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:20 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:42:20 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:42:20 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:21.123+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:21.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:21 compute-2 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 7527 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:21 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:21 compute-2 ceph-mon[77081]: pgmap v4060: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:21 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:21.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:22.117+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:22 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:22 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:23.111+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:23 compute-2 ceph-mon[77081]: pgmap v4061: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:23 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:23.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:24.161+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:25.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:25.204+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:25.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:25 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:25 compute-2 ceph-mon[77081]: pgmap v4062: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:25 compute-2 ceph-mon[77081]: Health check update: 179 slow ops, oldest one blocked for 7532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:26 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:26.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:26 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:26 compute-2 ceph-mon[77081]: pgmap v4063: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:27.211+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:27.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:27 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:28.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:28 compute-2 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:42:28 compute-2 ceph-mon[77081]: pgmap v4064: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:29.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:29.192+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:29.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:29 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:30.148+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:31 compute-2 podman[290261]: 2026-01-22 15:42:31.001392836 +0000 UTC m=+0.065718766 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:42:31 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:31 compute-2 ceph-mon[77081]: Health check update: 179 slow ops, oldest one blocked for 7537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:31 compute-2 ceph-mon[77081]: pgmap v4065: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:31.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:31.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:32 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:32 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:32.181+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:33 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:33 compute-2 ceph-mon[77081]: pgmap v4066: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:33.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:34 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:34 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:34.244+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:35 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:35 compute-2 ceph-mon[77081]: pgmap v4067: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:35 compute-2 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:35.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:36 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:36.233+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:37.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:37 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:37 compute-2 ceph-mon[77081]: pgmap v4068: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:37.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:37 compute-2 sudo[290286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:37 compute-2 sudo[290286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:37 compute-2 sudo[290286]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:37 compute-2 sudo[290311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:37 compute-2 sudo[290311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:37 compute-2 sudo[290311]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:38 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:38 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:38.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:39.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:39 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:39 compute-2 ceph-mon[77081]: pgmap v4069: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:40.235+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:40 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:40 compute-2 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:41 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:41 compute-2 ceph-mon[77081]: pgmap v4070: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:41.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:42.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:42 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:43.282+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:43 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:43 compute-2 ceph-mon[77081]: pgmap v4071: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:44.332+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:44 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:45.295+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:45 compute-2 ceph-mon[77081]: pgmap v4072: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:45 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:45 compute-2 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:46.258+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:46 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:46 compute-2 ceph-mon[77081]: pgmap v4073: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:47.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:47.237+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:42:47.283 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:42:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:42:47.284 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:42:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:42:47.284 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:42:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:47.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:47 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:48.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:49 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:49 compute-2 ceph-mon[77081]: pgmap v4074: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:49.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:49.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:49.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:50 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:50 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:50 compute-2 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:50.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:51 compute-2 podman[290342]: 2026-01-22 15:42:51.061103416 +0000 UTC m=+0.118854170 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 15:42:51 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:51 compute-2 ceph-mon[77081]: pgmap v4075: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:51.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:51.307+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:51.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:52 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:52.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:53.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:53 compute-2 ceph-mon[77081]: pgmap v4076: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:53 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:53.285+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:42:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:53.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:42:54 compute-2 sudo[290372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:54 compute-2 sudo[290372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:54 compute-2 sudo[290372]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:54 compute-2 sudo[290397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:42:54 compute-2 sudo[290397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:54 compute-2 sudo[290397]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:54 compute-2 sudo[290422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:54 compute-2 sudo[290422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:54 compute-2 sudo[290422]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:54 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:54 compute-2 sudo[290447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:42:54 compute-2 sudo[290447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:54.317+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:54 compute-2 sudo[290447]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:42:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:55.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:55 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:55 compute-2 ceph-mon[77081]: pgmap v4077: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:42:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:42:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:42:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:42:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:42:55 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:42:55 compute-2 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:42:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:55.357+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:42:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:42:56 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:56.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:57.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:57 compute-2 ceph-mon[77081]: pgmap v4078: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:57 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:57.339+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:57.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:57 compute-2 sudo[290505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:57 compute-2 sudo[290505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:57 compute-2 sudo[290505]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:57 compute-2 sudo[290530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:42:57 compute-2 sudo[290530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:42:57 compute-2 sudo[290530]: pam_unix(sudo:session): session closed for user root
Jan 22 15:42:58 compute-2 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:42:58 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:42:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:58.294+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:42:59 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:42:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:59.270+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:42:59 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:42:59 compute-2 ceph-mon[77081]: pgmap v4079: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:42:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:42:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:42:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:00 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:00.298+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:00 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:00 compute-2 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:01.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:01 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:01.292+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:01 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:01 compute-2 ceph-mon[77081]: pgmap v4080: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:01.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:01 compute-2 sudo[290557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:01 compute-2 sudo[290557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:01 compute-2 sudo[290557]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:01 compute-2 sudo[290588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:43:01 compute-2 sudo[290588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:01 compute-2 sudo[290588]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:02 compute-2 podman[290581]: 2026-01-22 15:43:02.014725573 +0000 UTC m=+0.116145499 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 15:43:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:02.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:02 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:02 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:43:02 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:43:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:03.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
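Each radosgw probe appears as a three-line group: request start, request done, then a beast access-log line with a fixed shape (peer IP, user, timestamp, request, status, byte count, latency). A small Python sketch matching that shape, tested against the line above:

import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.* latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
        '[22/Jan/2026:15:43:03.211 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST.search(line)
print(m['ip'], m['req'], m['status'], float(m['latency']))
# -> 192.168.122.102 HEAD / HTTP/1.0 200 0.0

The anonymous "HEAD /" requests alternating between 192.168.122.100 and .102 every two seconds look like load-balancer health probes rather than user traffic.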
Jan 22 15:43:03 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:03.266+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:03 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:03 compute-2 ceph-mon[77081]: pgmap v4081: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
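The pgmap ticks every couple of seconds carry the cluster-wide PG state tally; the two active+clean+laggy PGs here are consistent with the osd.2 slow-op warnings surrounding them. A sketch that turns the state list in a line like the one above into a dict:

def pg_states(line):
    # "... 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data ..."
    body = line.split(" pgs: ", 1)[1].split(";", 1)[0]
    return {state: int(n) for n, state in
            (item.split() for item in body.split(", "))}

# -> {'active+clean+laggy': 2, 'active+clean': 303}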
Jan 22 15:43:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:03.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:04 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:04.218+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:04 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
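As a side note on the _set_new_cache_sizes line above, the three reported allocations account for nearly all of the reported cache_size (numbers copied verbatim from the line):

inc, full, kv = 343932928, 348127232, 318767104
print(inc + full + kv, "/", 1020054731)  # 1010827264 / 1020054731, ~99%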
Jan 22 15:43:05 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:05.195+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:05.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:05 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:05 compute-2 ceph-mon[77081]: pgmap v4082: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:05 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7572 sec, osd.2 has slow ops (SLOW_OPS)
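The blocked-for counter in these SLOW_OPS updates advances by exactly 5 s per update across this window (7572, 7577, 7582, 7587, 7592, 7597), meaning it is the same op aging in place, not new ops getting stuck. Working the arithmetic backwards, every update resolves to the same onset time:

from datetime import datetime, timedelta

# (log time, blocked-for seconds) pairs from the health updates in this window
updates = [("15:43:05", 7572), ("15:43:10", 7577), ("15:43:15", 7582),
           ("15:43:20", 7587), ("15:43:25", 7592), ("15:43:30", 7597)]
for ts, blocked in updates:
    t = datetime.strptime("2026-01-22 " + ts, "%Y-%m-%d %H:%M:%S")
    print(t - timedelta(seconds=blocked))
# every pair prints 2026-01-22 13:36:53, i.e. one op stuck since ~13:37 UTC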
Jan 22 15:43:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:05.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:06 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:06.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:06 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:07.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:07.257+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:07 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:07 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:07 compute-2 ceph-mon[77081]: pgmap v4083: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:07.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:08.223+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:08 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:09.242+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:09 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:09 compute-2 ceph-mon[77081]: pgmap v4084: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:09.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:10.234+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:10 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:10 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:10 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:10 compute-2 ceph-mon[77081]: pgmap v4085: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:11.281+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:11.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:11 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:12.322+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:13 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:13 compute-2 ceph-mon[77081]: pgmap v4086: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:13.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:14 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:14.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:15.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:15 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:15.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:15 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:15 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:15 compute-2 ceph-mon[77081]: pgmap v4087: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:15 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:16.403+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:17.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:17.413+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:17 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:17 compute-2 ceph-mon[77081]: pgmap v4088: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:17 compute-2 sudo[290634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:17 compute-2 sudo[290634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:17 compute-2 sudo[290634]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:18 compute-2 sudo[290659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:18 compute-2 sudo[290659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:18 compute-2 sudo[290659]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:18 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:18.424+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:18 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:18 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3033953739' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:43:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/3033953739' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
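The two audited commands above ("df" and "osd pool get-quota" on the volumes pool) are the periodic capacity poll issued by the client.openstack account. Assuming a host with ceph CLI access and a suitable keyring, the equivalent queries from Python:

import json, subprocess

df = json.loads(subprocess.check_output(
    ["ceph", "df", "--format", "json"]))
quota = json.loads(subprocess.check_output(
    ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))
print(df["stats"]["total_avail_bytes"], quota)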
Jan 22 15:43:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:19.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:19.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:19 compute-2 ceph-mon[77081]: pgmap v4089: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:19 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:20.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:20 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:20 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:20 compute-2 ceph-mon[77081]: pgmap v4090: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:21.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:21.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:21 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:21.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:22 compute-2 podman[290686]: 2026-01-22 15:43:22.06462816 +0000 UTC m=+0.113336905 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:43:22 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:22.386+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:22 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:22 compute-2 ceph-mon[77081]: pgmap v4091: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #253. Immutable memtables: 0.
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.646946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 163] Flushing memtable with next log file: 253
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602646978, "job": 163, "event": "flush_started", "num_memtables": 1, "num_entries": 2756, "num_deletes": 540, "total_data_size": 5068168, "memory_usage": 5144432, "flush_reason": "Manual Compaction"}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 163] Level-0 flush table #254: started
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602666748, "cf_name": "default", "job": 163, "event": "table_file_creation", "file_number": 254, "file_size": 3292028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 125133, "largest_seqno": 127884, "table_properties": {"data_size": 3281741, "index_size": 5692, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32521, "raw_average_key_size": 23, "raw_value_size": 3257045, "raw_average_value_size": 2344, "num_data_blocks": 239, "num_entries": 1389, "num_filter_entries": 1389, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096432, "oldest_key_time": 1769096432, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 254, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 163] Flush lasted 19845 microseconds, and 9201 cpu microseconds.
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.666791) [db/flush_job.cc:967] [default] [JOB 163] Level-0 flush table #254: 3292028 bytes OK
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.666809) [db/memtable_list.cc:519] [default] Level-0 commit table #254 started
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669093) [db/memtable_list.cc:722] [default] Level-0 commit table #254: memtable #1 done
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669108) EVENT_LOG_v1 {"time_micros": 1769096602669104, "job": 163, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 163] Try to delete WAL files size 5054660, prev total WAL file size 5054660, number of live WAL files 2.
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000250.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.670725) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 164] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 163 Base level 0, inputs: [254(3214KB)], [252(9976KB)]
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602670787, "job": 164, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [254], "files_L6": [252], "score": -1, "input_data_size": 13507502, "oldest_snapshot_seqno": -1}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 164] Generated table #255: 14620 keys, 11671371 bytes, temperature: kUnknown
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602790201, "cf_name": "default", "job": 164, "event": "table_file_creation", "file_number": 255, "file_size": 11671371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11592337, "index_size": 41353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36613, "raw_key_size": 400558, "raw_average_key_size": 27, "raw_value_size": 11344915, "raw_average_value_size": 775, "num_data_blocks": 1495, "num_entries": 14620, "num_filter_entries": 14620, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 255, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.790825) [db/compaction/compaction_job.cc:1663] [default] [JOB 164] Compacted 1@0 + 1@6 files to L6 => 11671371 bytes
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.792739) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.7 rd, 97.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.7 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 15717, records dropped: 1097 output_compression: NoCompression
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.792769) EVENT_LOG_v1 {"time_micros": 1769096602792755, "job": 164, "event": "compaction_finished", "compaction_time_micros": 119817, "compaction_time_cpu_micros": 28082, "output_level": 6, "num_output_files": 1, "total_output_size": 11671371, "num_input_records": 15717, "num_output_records": 14620, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000254.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602794828, "job": 164, "event": "table_file_deletion", "file_number": 254}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000252.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602798979, "job": 164, "event": "table_file_deletion", "file_number": 252}
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.670617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:22 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
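The burst of rocksdb lines above is the mon's periodic store compaction: a ~5 MB memtable is flushed to L0 table #254 (job 163), then job 164 compacts #254 together with the existing L6 table #252 into #255 and deletes both inputs. The amplification figures in the job-164 summary recompute directly from the byte counts logged in the EVENT_LOG entries:

l0_in = 3292028      # table #254 (the L0 flush output), from job 163
total_in = 13507502  # job 164 "input_data_size" (L0 + L6 inputs)
out = 11671371       # job 164 "total_output_size" (table #255)

print(round(out / l0_in, 1))               # 3.5 -> write-amplify
print(round((total_in + out) / l0_in, 1))  # 7.6 -> read-write-amplify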
Jan 22 15:43:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:23.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:23 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:23.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:24.378+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:24 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:24 compute-2 ceph-mon[77081]: pgmap v4092: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:25.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:25.371+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:25.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:25 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:25 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:26 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:26.338+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:26 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:26 compute-2 ceph-mon[77081]: pgmap v4093: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:27.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:27.388+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:27.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:27 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:28.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:29 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:29 compute-2 ceph-mon[77081]: pgmap v4094: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:29.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:29.400+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:29.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:30 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:30 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:30 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:30.392+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:31 compute-2 ceph-mon[77081]: pgmap v4095: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:31 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:31.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:31.394+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:31.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:32 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:32 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:32.358+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:33 compute-2 podman[290717]: 2026-01-22 15:43:33.028418254 +0000 UTC m=+0.079783928 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:43:33 compute-2 ceph-mon[77081]: pgmap v4096: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:33 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:33.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:33.381+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:33.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:34 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:34.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:35 compute-2 ceph-mon[77081]: pgmap v4097: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:35 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:35 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:35.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:35.382+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:35.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:36.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:36 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:37.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:37 compute-2 ceph-mon[77081]: pgmap v4098: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:37 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:37.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:38 compute-2 sudo[290740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:38 compute-2 sudo[290740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:38 compute-2 sudo[290740]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:38 compute-2 sudo[290765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:38 compute-2 sudo[290765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:38 compute-2 sudo[290765]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:38 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:38.359+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:38 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:39.379+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:39 compute-2 ceph-mon[77081]: pgmap v4099: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:39 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #256. Immutable memtables: 0.
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.131271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 165] Flushing memtable with next log file: 256
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620131398, "job": 165, "event": "flush_started", "num_memtables": 1, "num_entries": 502, "num_deletes": 287, "total_data_size": 454806, "memory_usage": 464472, "flush_reason": "Manual Compaction"}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 165] Level-0 flush table #257: started
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620136895, "cf_name": "default", "job": 165, "event": "table_file_creation", "file_number": 257, "file_size": 297704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 127890, "largest_seqno": 128386, "table_properties": {"data_size": 295140, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7222, "raw_average_key_size": 19, "raw_value_size": 289573, "raw_average_value_size": 774, "num_data_blocks": 23, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 287, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096603, "oldest_key_time": 1769096603, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 257, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 165] Flush lasted 5665 microseconds, and 2730 cpu microseconds.
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136948) [db/flush_job.cc:967] [default] [JOB 165] Level-0 flush table #257: 297704 bytes OK
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136970) [db/memtable_list.cc:519] [default] Level-0 commit table #257 started
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138571) [db/memtable_list.cc:722] [default] Level-0 commit table #257: memtable #1 done
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138592) EVENT_LOG_v1 {"time_micros": 1769096620138584, "job": 165, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138614) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 165] Try to delete WAL files size 451641, prev total WAL file size 451641, number of live WAL files 2.
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000253.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.139269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0036303432' seq:72057594037927935, type:22 .. '6C6F676D0036323937' seq:0, type:0; will stop at (end)
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 166] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 165 Base level 0, inputs: [257(290KB)], [255(11MB)]
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620139361, "job": 166, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [257], "files_L6": [255], "score": -1, "input_data_size": 11969075, "oldest_snapshot_seqno": -1}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 166] Generated table #258: 14411 keys, 11804872 bytes, temperature: kUnknown
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620228473, "cf_name": "default", "job": 166, "event": "table_file_creation", "file_number": 258, "file_size": 11804872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11726693, "index_size": 41074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 397059, "raw_average_key_size": 27, "raw_value_size": 11482428, "raw_average_value_size": 796, "num_data_blocks": 1479, "num_entries": 14411, "num_filter_entries": 14411, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 258, "seqno_to_time_mapping": "N/A"}}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.228763) [db/compaction/compaction_job.cc:1663] [default] [JOB 166] Compacted 1@0 + 1@6 files to L6 => 11804872 bytes
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.230591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.2 rd, 132.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(79.9) write-amplify(39.7) OK, records in: 14994, records dropped: 583 output_compression: NoCompression
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.230614) EVENT_LOG_v1 {"time_micros": 1769096620230604, "job": 166, "event": "compaction_finished", "compaction_time_micros": 89199, "compaction_time_cpu_micros": 42043, "output_level": 6, "num_output_files": 1, "total_output_size": 11804872, "num_input_records": 14994, "num_output_records": 14411, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000257.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620230845, "job": 166, "event": "table_file_deletion", "file_number": 257}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000255.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620233749, "job": 166, "event": "table_file_deletion", "file_number": 255}
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.139145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-2 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 15:43:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:40.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:40 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:40 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:40 compute-2 ceph-mon[77081]: pgmap v4100: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:41.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:41.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:42 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:42.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:43 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:43 compute-2 ceph-mon[77081]: pgmap v4101: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:43.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:43.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:43.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:44 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:44.402+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:45.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:45 compute-2 ceph-mon[77081]: pgmap v4102: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:45 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:45 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:45.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:46.401+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:46 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:47.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:43:47.285 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:43:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:43:47.285 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:43:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:43:47.286 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:43:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:47.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:47 compute-2 ceph-mon[77081]: pgmap v4103: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:47 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:43:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:47.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:43:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:48.384+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:48 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:49.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:49.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:49.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:50 compute-2 ceph-mon[77081]: pgmap v4104: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:50 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:50.467+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:51 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:51 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:51 compute-2 ceph-mon[77081]: pgmap v4105: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:51.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:51.492+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:43:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:51.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:43:52 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:52.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:53 compute-2 podman[290797]: 2026-01-22 15:43:53.00913944 +0000 UTC m=+0.074117919 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 15:43:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:53 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:53 compute-2 ceph-mon[77081]: pgmap v4106: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:53 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:53.505+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:54 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:54.465+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:43:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:55.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:55.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:55 compute-2 ceph-mon[77081]: pgmap v4107: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:55 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:55 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:43:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:56.535+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:56 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:57.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:57.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:57 compute-2 ceph-mon[77081]: pgmap v4108: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:57 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:58 compute-2 sudo[290826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:58 compute-2 sudo[290826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:58 compute-2 sudo[290826]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:58 compute-2 sudo[290851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:43:58 compute-2 sudo[290851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:43:58 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:43:58 compute-2 sudo[290851]: pam_unix(sudo:session): session closed for user root
Jan 22 15:43:58 compute-2 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 15:43:58 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:58.528+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:58 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:58 compute-2 ceph-mon[77081]: pgmap v4109: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:43:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:59 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:43:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:59.500+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:43:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:43:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:43:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:43:59 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:00.547+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:00 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:00 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:00 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:00 compute-2 ceph-mon[77081]: pgmap v4110: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:01.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:01.532+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:01 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:01 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:02 compute-2 sudo[290879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:02 compute-2 sudo[290879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-2 sudo[290879]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-2 sudo[290904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:44:02 compute-2 sudo[290904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-2 sudo[290904]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-2 sudo[290929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:02 compute-2 sudo[290929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-2 sudo[290929]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-2 sudo[290954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:44:02 compute-2 sudo[290954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:02.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:02 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:02 compute-2 sudo[290954]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:02 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:02 compute-2 ceph-mon[77081]: pgmap v4111: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:03.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:03.520+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:03 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:03 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:44:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:44:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:44:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:44:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:44:03 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:44:04 compute-2 podman[291012]: 2026-01-22 15:44:04.006083187 +0000 UTC m=+0.066741033 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 15:44:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:04.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:04 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:05 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:05 compute-2 ceph-mon[77081]: pgmap v4112: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:05.515+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:05 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:06 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:06 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:06 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:06.473+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:06 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:07.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:07 compute-2 ceph-mon[77081]: pgmap v4113: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:07 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:07.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:07 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:08 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:08.472+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:09 compute-2 sudo[291035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:09 compute-2 sudo[291035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:09.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:09 compute-2 sudo[291035]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:09 compute-2 ceph-mon[77081]: pgmap v4114: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:09 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:44:09 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:44:09 compute-2 sudo[291060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:44:09 compute-2 sudo[291060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:09 compute-2 sudo[291060]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:09.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:10.456+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:10 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:10 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:10 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:11.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:11.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:11 compute-2 ceph-mon[77081]: pgmap v4115: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:11 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:12 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:12.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:13.536+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:13 compute-2 ceph-mon[77081]: pgmap v4116: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:13 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:14.526+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:14 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:14 compute-2 ceph-mon[77081]: pgmap v4117: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:15.488+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:15 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:15 compute-2 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:15 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:16.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:16 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:16 compute-2 ceph-mon[77081]: pgmap v4118: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:17.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:17.459+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:17 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:18.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:18 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:18 compute-2 sudo[291089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:18 compute-2 sudo[291089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:18 compute-2 sudo[291089]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:18 compute-2 sudo[291114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:18 compute-2 sudo[291114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:18 compute-2 sudo[291114]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:18 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:18 compute-2 ceph-mon[77081]: pgmap v4119: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1330623188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:44:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1330623188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:44:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:19.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:20 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:20.485+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:21 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:21 compute-2 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:21 compute-2 ceph-mon[77081]: pgmap v4120: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:21.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:21.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:22 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:22.510+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:22 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:23.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:23.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:23 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:23 compute-2 ceph-mon[77081]: pgmap v4121: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:23 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:24 compute-2 podman[291142]: 2026-01-22 15:44:24.08659622 +0000 UTC m=+0.134190135 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 15:44:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:24.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:25 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:25 compute-2 ceph-mon[77081]: pgmap v4122: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:25.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:26 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:26 compute-2 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:26.494+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:26 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:27 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:27 compute-2 ceph-mon[77081]: pgmap v4123: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:27.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:28 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:28.436+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:29 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:29 compute-2 ceph-mon[77081]: pgmap v4124: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:29.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:30 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:30 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:30 compute-2 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:30.435+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:31.468+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:31.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:32 compute-2 ceph-mon[77081]: pgmap v4125: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:32 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:32.430+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:32 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:33 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:33 compute-2 ceph-mon[77081]: pgmap v4126: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:33.395+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:33.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:34 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:34.368+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:35 compute-2 podman[291175]: 2026-01-22 15:44:35.008706763 +0000 UTC m=+0.067624907 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 15:44:35 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:35 compute-2 ceph-mon[77081]: pgmap v4127: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:35.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:36 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:36 compute-2 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:36.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:37 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:37 compute-2 ceph-mon[77081]: pgmap v4128: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:37.387+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:37.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:38 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:38 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:38.399+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:38 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:38 compute-2 sudo[291196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:38 compute-2 sudo[291196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:38 compute-2 sudo[291196]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:38 compute-2 sudo[291221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:38 compute-2 sudo[291221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:38 compute-2 sudo[291221]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:39 compute-2 ceph-mon[77081]: pgmap v4129: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:39 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:39.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:39.417+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:40 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:40 compute-2 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:40.376+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:41 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:41 compute-2 ceph-mon[77081]: pgmap v4130: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:41.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:41.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:42 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:42.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:43.361+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:43 compute-2 ceph-mon[77081]: pgmap v4131: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:43 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:44.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:44 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:44 compute-2 ceph-mon[77081]: pgmap v4132: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:45.362+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:45.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:45 compute-2 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:44:45 compute-2 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:46.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:47 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:47 compute-2 ceph-mon[77081]: pgmap v4133: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:44:47.286 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:44:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:44:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:44:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:44:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:44:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:47.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:47.366+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:47.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:48.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:48 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:49.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:49.397+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:49 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:49 compute-2 ceph-mon[77081]: pgmap v4134: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:49 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:50.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:50 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:50 compute-2 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:51.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:51.389+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:51 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:51 compute-2 ceph-mon[77081]: pgmap v4135: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:52.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:52 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:52 compute-2 ceph-mon[77081]: pgmap v4136: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:53.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:53.373+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:53.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:53 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:54.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:54 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:54 compute-2 ceph-mon[77081]: pgmap v4137: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:55 compute-2 podman[291254]: 2026-01-22 15:44:55.126572161 +0000 UTC m=+0.168558863 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 15:44:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:44:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:55.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:55.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:44:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:55.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:44:55 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:55 compute-2 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:44:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:56.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:56 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:56 compute-2 ceph-mon[77081]: pgmap v4138: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:57.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:57.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:57.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:57 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:58 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:58.419+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:58 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:58 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:58 compute-2 sudo[291282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:58 compute-2 sudo[291282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:58 compute-2 sudo[291282]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:58 compute-2 sudo[291307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:44:58 compute-2 sudo[291307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:44:58 compute-2 sudo[291307]: pam_unix(sudo:session): session closed for user root
Jan 22 15:44:58 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:58 compute-2 ceph-mon[77081]: pgmap v4139: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:44:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:44:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:59.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:44:59 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:59.428+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:59 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:44:59 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:44:59 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:44:59 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:44:59 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:59.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:44:59 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:00 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:00 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:00.451+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:00 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:00 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:01 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:01 compute-2 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:01 compute-2 ceph-mon[77081]: pgmap v4140: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:01.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:01 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:01.490+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:01 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:01 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:01 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:01 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:01 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:01.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:02 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:02 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:02.443+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:02 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:02 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:03 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:03 compute-2 ceph-mon[77081]: pgmap v4141: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:03 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:03.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:03 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:03.433+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:03 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:03 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:03 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:03 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:03 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:03.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:04 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:04 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:04.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:04 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:04 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:05 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:05.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:05 compute-2 ceph-mon[77081]: pgmap v4142: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:05 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:05 compute-2 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:05 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:05.418+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:05 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:05 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:05 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:05 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:05 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:06 compute-2 podman[291336]: 2026-01-22 15:45:06.026162179 +0000 UTC m=+0.073433431 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 15:45:06 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:06 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:06.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:06 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:06 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:07.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:07 compute-2 ceph-mon[77081]: pgmap v4143: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:07 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:07 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:07.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:07 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:07 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:07 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:07 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:07 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:07.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:08 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:08 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:08.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:08 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:08 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:09.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:09 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:09.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:09 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:09 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:09 compute-2 ceph-mon[77081]: pgmap v4144: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:09 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:09 compute-2 sudo[291357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:09 compute-2 sudo[291357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-2 sudo[291357]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:09 compute-2 sudo[291382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 22 15:45:09 compute-2 sudo[291382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-2 sudo[291382]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:09 compute-2 sudo[291407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:09 compute-2 sudo[291407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-2 sudo[291407]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:09 compute-2 sudo[291432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 22 15:45:09 compute-2 sudo[291432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:09 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:09 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:09 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:09.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:10 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:10 compute-2 sudo[291432]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:10 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:10.514+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:10 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:10 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:10 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:10 compute-2 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 15:45:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 15:45:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:45:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 22 15:45:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 15:45:10 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:11 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:11.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:11 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:11 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:11 compute-2 ceph-mon[77081]: pgmap v4145: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:11 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:11 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:11 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:11 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:11.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:12 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:12.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:12 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:12 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:12 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:13.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:13 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:13.502+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:13 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:13 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:13 compute-2 ceph-mon[77081]: pgmap v4146: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:13 compute-2 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:13 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:13 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:13 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:14 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:14.493+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:14 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:14 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:14 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:15 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:15.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:15 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:15.543+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:15 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:15 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:15 compute-2 ceph-mon[77081]: pgmap v4147: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:15 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:15 compute-2 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:15 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:15 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:15 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:16 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:16.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:16 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:16 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:16 compute-2 sudo[291491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:16 compute-2 sudo[291491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:16 compute-2 sudo[291491]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:16 compute-2 sudo[291516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 22 15:45:16 compute-2 sudo[291516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:16 compute-2 sudo[291516]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:16 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:45:16 compute-2 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 15:45:16 compute-2 ceph-mon[77081]: pgmap v4148: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:17.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:17 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:17.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:17 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:17 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:17 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:17 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:17 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:17.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:17 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:18 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:18.479+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:18 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:18 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 15:45:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:45:18 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 15:45:18 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:45:18 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:18 compute-2 ceph-mon[77081]: pgmap v4149: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 15:45:18 compute-2 ceph-mon[77081]: from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 15:45:19 compute-2 sudo[291542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:19 compute-2 sudo[291542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:19 compute-2 sudo[291542]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:19 compute-2 sudo[291568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:19 compute-2 sudo[291568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:19 compute-2 sudo[291568]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:19 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:19.439+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:19 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:19 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:19 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:19 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:19 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:19.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:20 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:20 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:20 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:20.471+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:20 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:20 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:21 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:21 compute-2 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:21 compute-2 ceph-mon[77081]: pgmap v4150: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:21 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:21 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:21.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:21 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:21 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:21 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:21 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:21 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:21.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:22 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:22.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:22 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:22 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:22 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:23 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:23.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:23 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:23 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:23 compute-2 ceph-mon[77081]: pgmap v4151: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:23 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:23 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:23 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:23 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:24 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:24.498+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:24 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:24 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:24 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:25 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:25 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:25.512+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:25 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:25 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:25 compute-2 ceph-mon[77081]: pgmap v4152: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:25 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:25 compute-2 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:25 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:25 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:25 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:26 compute-2 podman[291596]: 2026-01-22 15:45:26.093347551 +0000 UTC m=+0.144379146 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 15:45:26 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:26.653+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:26 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:26 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:27 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:27 compute-2 ceph-mon[77081]: pgmap v4153: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:27 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:27.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:27 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:27 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:27 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:27 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:27 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:28 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:28.738+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:28 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:28 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:28 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:28 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:45:29 compute-2 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 24K writes, 129K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 24K writes, 24K syncs, 1.00 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1848 writes, 10K keys, 1848 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s
                                           Interval WAL: 1848 writes, 1848 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     56.5      2.41              0.51        83    0.029       0      0       0.0       0.0
                                             L6      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   6.0    111.9     97.1      8.44              2.77        82    0.103    926K    51K       0.0       0.0
                                            Sum      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.9      0.1       0.0   7.0     87.0     88.1     10.86              3.28       165    0.066    926K    51K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    110.5    111.1      0.70              0.28        12    0.059     91K   4912       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   0.0    111.9     97.1      8.44              2.77        82    0.103    926K    51K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     56.5      2.41              0.51        82    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.133, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.93 GB write, 0.12 MB/s write, 0.92 GB read, 0.12 MB/s read, 10.9 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 96.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000626 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5007,90.60 MB,29.8026%) FilterBlock(165,2.52 MB,0.828045%) IndexBlock(165,3.05 MB,1.00344%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 22 15:45:29 compute-2 sshd-session[291623]: Accepted publickey for zuul from 192.168.122.10 port 38638 ssh2: ECDSA SHA256:ZGulYWguNMmFf6ciBfmyHwkPUuqxgPGYTHWq2rryzeI
Jan 22 15:45:29 compute-2 systemd-logind[787]: New session 51 of user zuul.
Jan 22 15:45:29 compute-2 systemd[1]: Started Session 51 of User zuul.
Jan 22 15:45:29 compute-2 sshd-session[291623]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 22 15:45:29 compute-2 sudo[291628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 22 15:45:29 compute-2 sudo[291628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 22 15:45:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:29.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:29 compute-2 ceph-mon[77081]: pgmap v4154: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:29 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:29 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:29.725+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:29 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:29 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:29 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:29 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:29 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:30 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:30 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:30.681+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:30 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:30 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:30 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:30 compute-2 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:30 compute-2 ceph-mon[77081]: from='client.27434 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:30 compute-2 ceph-mon[77081]: pgmap v4155: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:31 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:31.693+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:31 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:31 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:31 compute-2 ceph-mon[77081]: from='client.18522 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:31 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:31 compute-2 ceph-mon[77081]: from='client.27443 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3667014604' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:31 compute-2 ceph-mon[77081]: from='client.18528 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:31 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3055205003' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:31 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:31 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:31 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:31.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:32 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:32.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:32 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:32 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:33 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:33 compute-2 ceph-mon[77081]: from='client.28636 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:33 compute-2 ceph-mon[77081]: pgmap v4156: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:33 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 22 15:45:33 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/238792465' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:33 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:33.627+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:33 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:33 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:33 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:33 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:33 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:33.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:34 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:34 compute-2 ceph-mon[77081]: from='client.28642 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:34 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/238792465' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 15:45:34 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:34.656+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:34 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:34 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:35 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:35 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:35 compute-2 ceph-mon[77081]: pgmap v4157: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:35 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:35 compute-2 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:35 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:35.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:35 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:35 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:35 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:35 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:35 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:35.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:36 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:36 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:36 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:36 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:36.648+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:36 compute-2 ovs-vsctl[291917]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 15:45:37 compute-2 podman[291945]: 2026-01-22 15:45:37.00928715 +0000 UTC m=+0.052752926 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 15:45:37 compute-2 ceph-mon[77081]: from='client.27464 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:37 compute-2 ceph-mon[77081]: from='client.27476 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:37 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/624135693' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:37 compute-2 ceph-mon[77081]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:37 compute-2 ceph-mon[77081]: pgmap v4158: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:37 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:37 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3217916333' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:37 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/4183734222' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:37 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:37.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:37 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:37 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:37 compute-2 virtqemud[225907]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 15:45:37 compute-2 virtqemud[225907]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 15:45:37 compute-2 virtqemud[225907]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 15:45:37 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:37 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:37 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.18543 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.27494 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3475687693' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.18555 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3749877864' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1858915271' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3333500352' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1030413829' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1934319290' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:38 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: cache status {prefix=cache status} (starting...)
Jan 22 15:45:38 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: client ls {prefix=client ls} (starting...)
Jan 22 15:45:38 compute-2 lvm[292291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 15:45:38 compute-2 lvm[292291]: VG ceph_vg0 finished
Jan 22 15:45:38 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:38 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:38 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:38.745+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:39 compute-2 sudo[292367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:39 compute-2 sudo[292367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:39 compute-2 sudo[292367]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:39 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 15:45:39 compute-2 sudo[292414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 22 15:45:39 compute-2 sudo[292414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 22 15:45:39 compute-2 sudo[292414]: pam_unix(sudo:session): session closed for user root
Jan 22 15:45:39 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 15:45:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:39.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:39 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 15:45:39 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 15:45:39 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 15:45:39 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1538690967' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:39 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 15:45:39 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:39 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:39 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:39.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:39 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 15:45:39 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:39 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:39 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:40 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 15:45:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:40 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 15:45:40 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: ops {prefix=ops} (starting...)
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.27527 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.18579 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.27533 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1857267379' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: pgmap v4159: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1234990876' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/347329438' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2297523029' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/606502140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/129939776' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3170290335' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/158912215' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 15:45:40 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3339043403' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:40 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:40.815+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:40 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:40 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:40 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 22 15:45:40 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2675750719' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: session ls {prefix=session ls} (starting...)
Jan 22 15:45:41 compute-2 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: status {prefix=status} (starting...)
Jan 22 15:45:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 22 15:45:41 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1892410733' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.28666 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.18606 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.28672 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.18618 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4162337217' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.27569 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1538690967' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1158656698' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2747367764' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1040615603' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2912699497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1628751414' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3603627911' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: pgmap v4160: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.18657 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3339043403' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/226829701' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4136528779' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1073452591' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2675750719' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2938522680' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3865448621' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2501336058' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1892410733' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 15:45:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 22 15:45:41 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1561478504' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 15:45:41 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/195662771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:41 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:41.808+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:41 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:41 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:41 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:41 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 15:45:41 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 15:45:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 15:45:42 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3523237783' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 15:45:42 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/157024880' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.28705 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.27611 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1561478504' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/261441280' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.27626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.18693 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/195662771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2764929686' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2047798707' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3523237783' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3457684757' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 15:45:42 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4174554485' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:42 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:42.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:42 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:42 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:42 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 15:45:42 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/928646561' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 22 15:45:43 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4016002785' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 22 15:45:43 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1868857439' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.28747 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.18708 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.27647 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.28771 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.18723 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.27665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2917217955' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: pgmap v4161: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/157024880' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/150362007' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4174554485' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2148250249' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/928646561' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1894221034' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4016002785' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1868857439' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3731501614' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1042555888' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 15:45:43 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4271205578' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:43.825+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:43 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:43 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 15:45:43 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4176284565' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:43 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 22 15:45:43 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/52836653' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:43 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:43 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:43 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:43.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 22 15:45:44 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2181286671' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.27677 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.27683 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.27695 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.18774 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.28819 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.27707 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4271205578' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4176284565' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/52836653' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2181286671' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/151730726' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2640204771' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9997> 2026-01-22T15:31:48.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:18.318558+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6964 sent 6963 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:48.211973+0000 osd.2 (osd.2) 6964 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6963) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:47.256511+0000 osd.2 (osd.2) 6963 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6964) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:48.211973+0000 osd.2 (osd.2) 6964 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9984> 2026-01-22T15:31:49.255+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:19.318772+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6965 sent 6964 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:49.257288+0000 osd.2 (osd.2) 6965 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6965) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:49.257288+0000 osd.2 (osd.2) 6965 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,4,1,1,15,34,33,65,24])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9972> 2026-01-22T15:31:50.247+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:20.319015+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6966 sent 6965 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:50.248925+0000 osd.2 (osd.2) 6966 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,4,1,0,16,34,33,65,24])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9962> 2026-01-22T15:31:51.282+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:21.319214+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6967 sent 6966 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:51.283373+0000 osd.2 (osd.2) 6967 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6966) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:50.248925+0000 osd.2 (osd.2) 6966 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:22.319419+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9948> 2026-01-22T15:31:52.324+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735caeec00 session 0x55735a69a960
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c33c400
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6967) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:51.283373+0000 osd.2 (osd.2) 6967 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:23.319655+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6968 sent 6967 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:52.325998+0000 osd.2 (osd.2) 6968 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9931> 2026-01-22T15:31:53.348+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:24.319941+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6969 sent 6968 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:53.349831+0000 osd.2 (osd.2) 6969 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6968) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:52.325998+0000 osd.2 (osd.2) 6968 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9920> 2026-01-22T15:31:54.394+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:25.320296+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6970 sent 6969 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:54.395508+0000 osd.2 (osd.2) 6970 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,5,4,1,0,16,34,33,65,24])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6969) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:53.349831+0000 osd.2 (osd.2) 6969 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6970) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:54.395508+0000 osd.2 (osd.2) 6970 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9906> 2026-01-22T15:31:55.439+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:26.320514+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6971 sent 6970 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:55.440542+0000 osd.2 (osd.2) 6971 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9897> 2026-01-22T15:31:56.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6971) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:55.440542+0000 osd.2 (osd.2) 6971 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:27.320718+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6972 sent 6971 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:56.416245+0000 osd.2 (osd.2) 6972 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9886> 2026-01-22T15:31:57.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6972) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:56.416245+0000 osd.2 (osd.2) 6972 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:28.320953+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6973 sent 6972 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:57.444103+0000 osd.2 (osd.2) 6973 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9872> 2026-01-22T15:31:58.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6973) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:57.444103+0000 osd.2 (osd.2) 6973 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,5,1,0,16,34,32,65,25])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:29.321179+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6974 sent 6973 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:58.453505+0000 osd.2 (osd.2) 6974 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9860> 2026-01-22T15:31:59.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6974) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:58.453505+0000 osd.2 (osd.2) 6974 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:30.321420+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6975 sent 6974 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:31:59.449069+0000 osd.2 (osd.2) 6975 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9849> 2026-01-22T15:32:00.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,5,1,0,16,34,32,65,25])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6975) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:31:59.449069+0000 osd.2 (osd.2) 6975 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:31.321650+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6976 sent 6975 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:00.439753+0000 osd.2 (osd.2) 6976 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9837> 2026-01-22T15:32:01.464+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6976) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:00.439753+0000 osd.2 (osd.2) 6976 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:32.321899+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6977 sent 6976 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:01.466074+0000 osd.2 (osd.2) 6977 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9826> 2026-01-22T15:32:02.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:33.322152+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6978 sent 6977 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:02.503060+0000 osd.2 (osd.2) 6978 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9814> 2026-01-22T15:32:03.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6977) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:01.466074+0000 osd.2 (osd.2) 6977 : cluster [WRN] 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:34.322403+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6979 sent 6978 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:03.461888+0000 osd.2 (osd.2) 6979 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9803> 2026-01-22T15:32:04.412+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6978) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:02.503060+0000 osd.2 (osd.2) 6978 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6979) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:03.461888+0000 osd.2 (osd.2) 6979 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:35.322610+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6980 sent 6979 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:04.413294+0000 osd.2 (osd.2) 6980 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9790> 2026-01-22T15:32:05.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6980) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:04.413294+0000 osd.2 (osd.2) 6980 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:36.322818+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6981 sent 6980 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:05.431371+0000 osd.2 (osd.2) 6981 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,8,2,0,16,34,32,65,25])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9778> 2026-01-22T15:32:06.481+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:37.323198+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6982 sent 6981 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:06.481566+0000 osd.2 (osd.2) 6982 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6981) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:05.431371+0000 osd.2 (osd.2) 6981 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9767> 2026-01-22T15:32:07.437+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,5,5,0,16,34,32,65,25])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6982) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:06.481566+0000 osd.2 (osd.2) 6982 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:38.323424+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6983 sent 6982 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:07.438273+0000 osd.2 (osd.2) 6983 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9752> 2026-01-22T15:32:08.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6983) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:07.438273+0000 osd.2 (osd.2) 6983 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:39.323637+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6984 sent 6983 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:08.393516+0000 osd.2 (osd.2) 6984 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9742> 2026-01-22T15:32:09.344+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,65,25])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:40.323859+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6985 sent 6984 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:09.344842+0000 osd.2 (osd.2) 6985 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9732> 2026-01-22T15:32:10.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6984) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:08.393516+0000 osd.2 (osd.2) 6984 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:41.324044+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6986 sent 6985 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:10.389277+0000 osd.2 (osd.2) 6986 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9721> 2026-01-22T15:32:11.382+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6985) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:09.344842+0000 osd.2 (osd.2) 6985 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6986) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:10.389277+0000 osd.2 (osd.2) 6986 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:42.324242+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6987 sent 6986 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:11.383173+0000 osd.2 (osd.2) 6987 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9708> 2026-01-22T15:32:12.334+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,64,26])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6987) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:11.383173+0000 osd.2 (osd.2) 6987 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:43.324493+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6988 sent 6987 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:12.334453+0000 osd.2 (osd.2) 6988 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9693> 2026-01-22T15:32:13.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:44.324735+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6989 sent 6988 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:13.363674+0000 osd.2 (osd.2) 6989 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9684> 2026-01-22T15:32:14.390+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6988) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:12.334453+0000 osd.2 (osd.2) 6988 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6989) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:13.363674+0000 osd.2 (osd.2) 6989 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,64,26])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:45.325201+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6990 sent 6989 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:14.391393+0000 osd.2 (osd.2) 6990 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9670> 2026-01-22T15:32:15.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6990) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:14.391393+0000 osd.2 (osd.2) 6990 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:46.325399+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6991 sent 6990 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:15.389280+0000 osd.2 (osd.2) 6991 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9659> 2026-01-22T15:32:16.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6991) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:15.389280+0000 osd.2 (osd.2) 6991 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:47.325567+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6992 sent 6991 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:16.384643+0000 osd.2 (osd.2) 6992 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9648> 2026-01-22T15:32:17.404+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:48.325759+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6993 sent 6992 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:17.406367+0000 osd.2 (osd.2) 6993 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9635> 2026-01-22T15:32:18.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6992) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:16.384643+0000 osd.2 (osd.2) 6992 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:49.326010+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 6994 sent 6993 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:18.455326+0000 osd.2 (osd.2) 6994 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9624> 2026-01-22T15:32:19.489+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:50.326211+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 6995 sent 6994 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:19.491455+0000 osd.2 (osd.2) 6995 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,64,26])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9613> 2026-01-22T15:32:20.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,61,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:51.326436+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 4 last_log 6996 sent 6995 num 4 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:20.472295+0000 osd.2 (osd.2) 6996 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6993) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:17.406367+0000 osd.2 (osd.2) 6993 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6994) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:18.455326+0000 osd.2 (osd.2) 6994 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9600> 2026-01-22T15:32:21.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:52.326639+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 6997 sent 6996 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:21.426053+0000 osd.2 (osd.2) 6997 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9591> 2026-01-22T15:32:22.401+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6995) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:19.491455+0000 osd.2 (osd.2) 6995 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:53.326865+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 6998 sent 6997 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:22.403092+0000 osd.2 (osd.2) 6998 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9576> 2026-01-22T15:32:23.441+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6996) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:20.472295+0000 osd.2 (osd.2) 6996 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6997) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:21.426053+0000 osd.2 (osd.2) 6997 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6998) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:22.403092+0000 osd.2 (osd.2) 6998 : cluster [WRN] 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-22T15:31:54.327091+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 6999 sent 6998 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:23.442395+0000 osd.2 (osd.2) 6999 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _finish_auth 0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:54.327863+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9557> 2026-01-22T15:32:24.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 6999) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:23.442395+0000 osd.2 (osd.2) 6999 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:55.327343+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7000 sent 6999 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:24.491247+0000 osd.2 (osd.2) 7000 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9546> 2026-01-22T15:32:25.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,1,16,34,31,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:56.327496+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7001 sent 7000 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:25.502822+0000 osd.2 (osd.2) 7001 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9536> 2026-01-22T15:32:26.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:57.327654+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7002 sent 7001 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:26.517147+0000 osd.2 (osd.2) 7002 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9527> 2026-01-22T15:32:27.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7000) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:24.491247+0000 osd.2 (osd.2) 7000 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:58.327830+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7003 sent 7002 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:27.553877+0000 osd.2 (osd.2) 7003 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9513> 2026-01-22T15:32:28.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:31:59.328083+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 4 last_log 7004 sent 7003 num 4 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:28.580143+0000 osd.2 (osd.2) 7004 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9504> 2026-01-22T15:32:29.540+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,1,16,34,31,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7001) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:25.502822+0000 osd.2 (osd.2) 7001 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7002) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:26.517147+0000 osd.2 (osd.2) 7002 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7003) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:27.553877+0000 osd.2 (osd.2) 7003 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:00.328346+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7005 sent 7004 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:29.542135+0000 osd.2 (osd.2) 7005 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9488> 2026-01-22T15:32:30.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7004) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:28.580143+0000 osd.2 (osd.2) 7004 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7005) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:29.542135+0000 osd.2 (osd.2) 7005 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:01.328532+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7006 sent 7005 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:30.584650+0000 osd.2 (osd.2) 7006 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9475> 2026-01-22T15:32:31.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:02.328745+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7007 sent 7006 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:31.615906+0000 osd.2 (osd.2) 7007 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7006) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:30.584650+0000 osd.2 (osd.2) 7006 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9464> 2026-01-22T15:32:32.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:03.328999+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7008 sent 7007 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:32.572377+0000 osd.2 (osd.2) 7008 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7007) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:31.615906+0000 osd.2 (osd.2) 7007 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7008) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:32.572377+0000 osd.2 (osd.2) 7008 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9448> 2026-01-22T15:32:33.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,5,1,16,34,31,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:04.329196+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7009 sent 7008 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:33.541427+0000 osd.2 (osd.2) 7009 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7009) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:33.541427+0000 osd.2 (osd.2) 7009 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9436> 2026-01-22T15:32:34.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:05.329397+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7010 sent 7009 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:34.555889+0000 osd.2 (osd.2) 7010 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7010) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:34.555889+0000 osd.2 (osd.2) 7010 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9425> 2026-01-22T15:32:35.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:06.329573+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7011 sent 7010 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:35.554794+0000 osd.2 (osd.2) 7011 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7011) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:35.554794+0000 osd.2 (osd.2) 7011 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9414> 2026-01-22T15:32:36.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:07.329768+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7012 sent 7011 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:36.559003+0000 osd.2 (osd.2) 7012 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7012) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:36.559003+0000 osd.2 (osd.2) 7012 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9403> 2026-01-22T15:32:37.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,34,31,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:08.329947+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7013 sent 7012 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:37.547197+0000 osd.2 (osd.2) 7013 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9390> 2026-01-22T15:32:38.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7013) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:37.547197+0000 osd.2 (osd.2) 7013 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:09.330120+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7014 sent 7013 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:38.561134+0000 osd.2 (osd.2) 7014 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9379> 2026-01-22T15:32:39.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7014) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:38.561134+0000 osd.2 (osd.2) 7014 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:10.330306+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7015 sent 7014 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:39.606599+0000 osd.2 (osd.2) 7015 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9368> 2026-01-22T15:32:40.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:11.330546+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7016 sent 7015 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:40.598514+0000 osd.2 (osd.2) 7016 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7015) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:39.606599+0000 osd.2 (osd.2) 7015 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9357> 2026-01-22T15:32:41.549+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:12.330723+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7017 sent 7016 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:41.550375+0000 osd.2 (osd.2) 7017 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7016) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:40.598514+0000 osd.2 (osd.2) 7016 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7017) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:41.550375+0000 osd.2 (osd.2) 7017 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9343> 2026-01-22T15:32:42.562+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:13.331033+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7018 sent 7017 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:42.563339+0000 osd.2 (osd.2) 7018 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9331> 2026-01-22T15:32:43.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7018) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:42.563339+0000 osd.2 (osd.2) 7018 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:14.331383+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7019 sent 7018 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:43.526535+0000 osd.2 (osd.2) 7019 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9320> 2026-01-22T15:32:44.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7019) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:43.526535+0000 osd.2 (osd.2) 7019 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:15.331674+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7020 sent 7019 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:44.566510+0000 osd.2 (osd.2) 7020 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9309> 2026-01-22T15:32:45.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7020) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:44.566510+0000 osd.2 (osd.2) 7020 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735c7cc000 session 0x55735a5254a0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c5fac00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:16.331894+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7021 sent 7020 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:45.614049+0000 osd.2 (osd.2) 7021 : cluster [WRN] 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9296> 2026-01-22T15:32:46.621+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:17.332135+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7022 sent 7021 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:46.622062+0000 osd.2 (osd.2) 7022 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7021) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:45.614049+0000 osd.2 (osd.2) 7021 : cluster [WRN] 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9284> 2026-01-22T15:32:47.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:18.332360+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7023 sent 7022 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:47.662368+0000 osd.2 (osd.2) 7023 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9271> 2026-01-22T15:32:48.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7022) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:46.622062+0000 osd.2 (osd.2) 7022 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7023) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:47.662368+0000 osd.2 (osd.2) 7023 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:19.332530+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7024 sent 7023 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:48.683008+0000 osd.2 (osd.2) 7024 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9258> 2026-01-22T15:32:49.719+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:20.332725+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7025 sent 7024 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:49.719908+0000 osd.2 (osd.2) 7025 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9248> 2026-01-22T15:32:50.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7024) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:48.683008+0000 osd.2 (osd.2) 7024 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:21.332882+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7026 sent 7025 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:50.678615+0000 osd.2 (osd.2) 7026 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735b583c00 session 0x55735c5e83c0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735bf09800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9235> 2026-01-22T15:32:51.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7025) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:49.719908+0000 osd.2 (osd.2) 7025 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:22.333156+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7027 sent 7026 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:51.653618+0000 osd.2 (osd.2) 7027 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9223> 2026-01-22T15:32:52.637+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:23.333419+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7028 sent 7027 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:52.639208+0000 osd.2 (osd.2) 7028 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9211> 2026-01-22T15:32:53.645+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7026) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:50.678615+0000 osd.2 (osd.2) 7026 : cluster [WRN] 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7027) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:51.653618+0000 osd.2 (osd.2) 7027 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:24.333598+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7029 sent 7028 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:53.647165+0000 osd.2 (osd.2) 7029 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9198> 2026-01-22T15:32:54.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7028) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:52.639208+0000 osd.2 (osd.2) 7028 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7029) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:53.647165+0000 osd.2 (osd.2) 7029 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:25.333800+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7030 sent 7029 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:54.630055+0000 osd.2 (osd.2) 7030 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9184> 2026-01-22T15:32:55.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7030) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:54.630055+0000 osd.2 (osd.2) 7030 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:26.334056+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7031 sent 7030 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:55.680071+0000 osd.2 (osd.2) 7031 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9173> 2026-01-22T15:32:56.722+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7031) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:55.680071+0000 osd.2 (osd.2) 7031 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:27.334301+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7032 sent 7031 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:56.723741+0000 osd.2 (osd.2) 7032 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9162> 2026-01-22T15:32:57.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7032) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:56.723741+0000 osd.2 (osd.2) 7032 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:28.334559+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7033 sent 7032 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:57.700379+0000 osd.2 (osd.2) 7033 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9148> 2026-01-22T15:32:58.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7033) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:57.700379+0000 osd.2 (osd.2) 7033 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:29.334750+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7034 sent 7033 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:58.748421+0000 osd.2 (osd.2) 7034 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9137> 2026-01-22T15:32:59.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:30.334972+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7035 sent 7034 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:32:59.768551+0000 osd.2 (osd.2) 7035 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7034) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:58.748421+0000 osd.2 (osd.2) 7034 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9125> 2026-01-22T15:33:00.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:31.335140+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7036 sent 7035 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:00.777346+0000 osd.2 (osd.2) 7036 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7035) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:32:59.768551+0000 osd.2 (osd.2) 7035 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7036) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:00.777346+0000 osd.2 (osd.2) 7036 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9111> 2026-01-22T15:33:01.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:32.335391+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7037 sent 7036 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:01.732715+0000 osd.2 (osd.2) 7037 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9102> 2026-01-22T15:33:02.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:33.335625+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7038 sent 7037 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:02.706840+0000 osd.2 (osd.2) 7038 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7037) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:01.732715+0000 osd.2 (osd.2) 7037 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9088> 2026-01-22T15:33:03.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:34.335828+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7039 sent 7038 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:03.705470+0000 osd.2 (osd.2) 7039 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7038) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:02.706840+0000 osd.2 (osd.2) 7038 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7039) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:03.705470+0000 osd.2 (osd.2) 7039 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9074> 2026-01-22T15:33:04.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:35.336084+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7040 sent 7039 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:04.749155+0000 osd.2 (osd.2) 7040 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9065> 2026-01-22T15:33:05.742+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7040) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:04.749155+0000 osd.2 (osd.2) 7040 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:36.336295+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7041 sent 7040 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:05.743406+0000 osd.2 (osd.2) 7041 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9053> 2026-01-22T15:33:06.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:37.336498+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7042 sent 7041 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:06.777435+0000 osd.2 (osd.2) 7042 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9044> 2026-01-22T15:33:07.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,10,1,12,37,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:38.336664+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7043 sent 7042 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:07.776945+0000 osd.2 (osd.2) 7043 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7041) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:05.743406+0000 osd.2 (osd.2) 7041 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9029> 2026-01-22T15:33:08.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:39.336884+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7044 sent 7043 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:08.791074+0000 osd.2 (osd.2) 7044 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9020> 2026-01-22T15:33:09.827+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,9,2,11,38,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:40.337262+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 4 last_log 7045 sent 7044 num 4 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:09.828604+0000 osd.2 (osd.2) 7045 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7042) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:06.777435+0000 osd.2 (osd.2) 7042 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7043) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:07.776945+0000 osd.2 (osd.2) 7043 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -9006> 2026-01-22T15:33:10.852+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7044) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:08.791074+0000 osd.2 (osd.2) 7044 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7045) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:09.828604+0000 osd.2 (osd.2) 7045 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:41.337577+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7046 sent 7045 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:10.853473+0000 osd.2 (osd.2) 7046 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8993> 2026-01-22T15:33:11.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:42.337792+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7047 sent 7046 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:11.850819+0000 osd.2 (osd.2) 7047 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7046) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:10.853473+0000 osd.2 (osd.2) 7046 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8979> 2026-01-22T15:33:12.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:43.338044+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7048 sent 7047 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:12.896733+0000 osd.2 (osd.2) 7048 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7047) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:11.850819+0000 osd.2 (osd.2) 7047 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7048) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:12.896733+0000 osd.2 (osd.2) 7048 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8966> 2026-01-22T15:33:13.906+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:44.338432+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7049 sent 7048 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:13.907687+0000 osd.2 (osd.2) 7049 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,5,11,38,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7049) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:13.907687+0000 osd.2 (osd.2) 7049 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8954> 2026-01-22T15:33:14.918+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:45.338696+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7050 sent 7049 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:14.919834+0000 osd.2 (osd.2) 7050 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,5,11,38,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7050) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:14.919834+0000 osd.2 (osd.2) 7050 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8942> 2026-01-22T15:33:15.963+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:46.338935+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7051 sent 7050 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:15.964544+0000 osd.2 (osd.2) 7051 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7051) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:15.964544+0000 osd.2 (osd.2) 7051 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8931> 2026-01-22T15:33:16.924+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:47.339191+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7052 sent 7051 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:16.926234+0000 osd.2 (osd.2) 7052 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7052) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:16.926234+0000 osd.2 (osd.2) 7052 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8917> 2026-01-22T15:33:17.931+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:48.339372+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7053 sent 7052 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:17.932194+0000 osd.2 (osd.2) 7053 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7053) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:17.932194+0000 osd.2 (osd.2) 7053 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8906> 2026-01-22T15:33:18.949+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:49.339582+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7054 sent 7053 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:18.949747+0000 osd.2 (osd.2) 7054 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8897> 2026-01-22T15:33:19.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7054) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:18.949747+0000 osd.2 (osd.2) 7054 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:50.339781+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7055 sent 7054 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:19.956990+0000 osd.2 (osd.2) 7055 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8886> 2026-01-22T15:33:21.006+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:51.339965+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7056 sent 7055 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:21.007173+0000 osd.2 (osd.2) 7056 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,5,11,38,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7055) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:19.956990+0000 osd.2 (osd.2) 7055 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8874> 2026-01-22T15:33:21.988+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:52.340208+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7057 sent 7056 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:21.988688+0000 osd.2 (osd.2) 7057 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8862> 2026-01-22T15:33:23.002+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:53.340764+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7058 sent 7057 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:23.002502+0000 osd.2 (osd.2) 7058 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8853> 2026-01-22T15:33:24.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:54.340989+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 4 last_log 7059 sent 7058 num 4 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:24.006354+0000 osd.2 (osd.2) 7059 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8844> 2026-01-22T15:33:25.011+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:55.341204+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 5 last_log 7060 sent 7059 num 5 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:25.011567+0000 osd.2 (osd.2) 7060 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7056) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:21.007173+0000 osd.2 (osd.2) 7056 : cluster [WRN] 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7057) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:21.988688+0000 osd.2 (osd.2) 7057 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,41,32,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8830> 2026-01-22T15:33:26.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:56.341591+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 4 last_log 7061 sent 7060 num 4 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:26.040248+0000 osd.2 (osd.2) 7061 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7058) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:23.002502+0000 osd.2 (osd.2) 7058 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7059) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:24.006354+0000 osd.2 (osd.2) 7059 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8817> 2026-01-22T15:33:27.028+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:57.341868+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7062 sent 7061 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:27.029124+0000 osd.2 (osd.2) 7062 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8805> 2026-01-22T15:33:28.068+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7060) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:25.011567+0000 osd.2 (osd.2) 7060 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7061) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:26.040248+0000 osd.2 (osd.2) 7061 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7062) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:27.029124+0000 osd.2 (osd.2) 7062 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:58.342112+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7063 sent 7062 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:28.069176+0000 osd.2 (osd.2) 7063 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,40,33,62,29])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8789> 2026-01-22T15:33:29.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:32:59.342366+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7064 sent 7063 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:29.091469+0000 osd.2 (osd.2) 7064 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7063) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:28.069176+0000 osd.2 (osd.2) 7063 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8778> 2026-01-22T15:33:30.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:00.342631+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7065 sent 7064 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:30.080813+0000 osd.2 (osd.2) 7065 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7064) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:29.091469+0000 osd.2 (osd.2) 7064 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7065) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:30.080813+0000 osd.2 (osd.2) 7065 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8765> 2026-01-22T15:33:31.092+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:01.342847+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7066 sent 7065 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:31.093965+0000 osd.2 (osd.2) 7066 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7066) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:31.093965+0000 osd.2 (osd.2) 7066 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,37,36,62,29])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8753> 2026-01-22T15:33:32.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:02.343042+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7067 sent 7066 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:32.112877+0000 osd.2 (osd.2) 7067 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7067) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:32.112877+0000 osd.2 (osd.2) 7067 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8739> 2026-01-22T15:33:33.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:03.343293+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7068 sent 7067 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:33.139979+0000 osd.2 (osd.2) 7068 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8730> 2026-01-22T15:33:34.094+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7068) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:33.139979+0000 osd.2 (osd.2) 7068 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:04.343573+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7069 sent 7068 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:34.096066+0000 osd.2 (osd.2) 7069 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8719> 2026-01-22T15:33:35.069+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7069) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:34.096066+0000 osd.2 (osd.2) 7069 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:05.343757+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7070 sent 7069 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:35.070094+0000 osd.2 (osd.2) 7070 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8708> 2026-01-22T15:33:36.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7070) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:35.070094+0000 osd.2 (osd.2) 7070 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:06.344208+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7071 sent 7070 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:36.040997+0000 osd.2 (osd.2) 7071 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,37,36,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8696> 2026-01-22T15:33:37.048+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:07.344536+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7072 sent 7071 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:37.049360+0000 osd.2 (osd.2) 7072 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7071) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:36.040997+0000 osd.2 (osd.2) 7071 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,6,8,37,36,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8681> 2026-01-22T15:33:38.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:08.344720+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7073 sent 7072 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:38.030543+0000 osd.2 (osd.2) 7073 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7072) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:37.049360+0000 osd.2 (osd.2) 7072 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7073) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:38.030543+0000 osd.2 (osd.2) 7073 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8668> 2026-01-22T15:33:39.077+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:09.345293+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7074 sent 7073 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:39.079204+0000 osd.2 (osd.2) 7074 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7074) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:39.079204+0000 osd.2 (osd.2) 7074 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8657> 2026-01-22T15:33:40.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:10.345591+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7075 sent 7074 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:40.104689+0000 osd.2 (osd.2) 7075 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7075) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:40.104689+0000 osd.2 (osd.2) 7075 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8646> 2026-01-22T15:33:41.134+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:11.346885+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7076 sent 7075 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:41.136103+0000 osd.2 (osd.2) 7076 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7076) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:41.136103+0000 osd.2 (osd.2) 7076 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8635> 2026-01-22T15:33:42.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:12.347734+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7077 sent 7076 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:42.122719+0000 osd.2 (osd.2) 7077 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8622> 2026-01-22T15:33:43.082+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:13.347952+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7078 sent 7077 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:43.083409+0000 osd.2 (osd.2) 7078 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7077) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:42.122719+0000 osd.2 (osd.2) 7077 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8611> 2026-01-22T15:33:44.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:14.348643+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7079 sent 7078 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:44.081110+0000 osd.2 (osd.2) 7079 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7078) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:43.083409+0000 osd.2 (osd.2) 7078 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7079) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:44.081110+0000 osd.2 (osd.2) 7079 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8598> 2026-01-22T15:33:45.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:15.349727+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7080 sent 7079 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:45.104671+0000 osd.2 (osd.2) 7080 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7080) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:45.104671+0000 osd.2 (osd.2) 7080 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8586> 2026-01-22T15:33:46.135+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:16.350347+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7081 sent 7080 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:46.136978+0000 osd.2 (osd.2) 7081 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7081) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:46.136978+0000 osd.2 (osd.2) 7081 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8575> 2026-01-22T15:33:47.174+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:17.351384+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7082 sent 7081 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:47.176121+0000 osd.2 (osd.2) 7082 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8563> 2026-01-22T15:33:48.127+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:18.351955+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7083 sent 7082 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:48.128800+0000 osd.2 (osd.2) 7083 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7082) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:47.176121+0000 osd.2 (osd.2) 7082 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8552> 2026-01-22T15:33:49.153+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:19.352762+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7084 sent 7083 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:49.154907+0000 osd.2 (osd.2) 7084 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7083) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:48.128800+0000 osd.2 (osd.2) 7083 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8541> 2026-01-22T15:33:50.126+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:20.353009+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7085 sent 7084 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:50.127916+0000 osd.2 (osd.2) 7085 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7084) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:49.154907+0000 osd.2 (osd.2) 7084 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8530> 2026-01-22T15:33:51.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:21.353598+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7086 sent 7085 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:51.112612+0000 osd.2 (osd.2) 7086 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7085) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:50.127916+0000 osd.2 (osd.2) 7085 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7086) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:51.112612+0000 osd.2 (osd.2) 7086 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8516> 2026-01-22T15:33:52.066+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:22.354052+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7087 sent 7086 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:52.068094+0000 osd.2 (osd.2) 7087 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7087) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:52.068094+0000 osd.2 (osd.2) 7087 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8502> 2026-01-22T15:33:53.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:23.354251+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7088 sent 7087 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:53.040859+0000 osd.2 (osd.2) 7088 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8493> 2026-01-22T15:33:54.080+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:24.354479+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7089 sent 7088 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:54.081802+0000 osd.2 (osd.2) 7089 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7088) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:53.040859+0000 osd.2 (osd.2) 7088 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8482> 2026-01-22T15:33:55.091+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:25.354741+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7090 sent 7089 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:55.092331+0000 osd.2 (osd.2) 7090 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7089) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:54.081802+0000 osd.2 (osd.2) 7089 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7090) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:55.092331+0000 osd.2 (osd.2) 7090 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8468> 2026-01-22T15:33:56.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:26.355085+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7091 sent 7090 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:56.100797+0000 osd.2 (osd.2) 7091 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8459> 2026-01-22T15:33:57.101+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:27.355579+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7092 sent 7091 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:57.101864+0000 osd.2 (osd.2) 7092 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7091) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:56.100797+0000 osd.2 (osd.2) 7091 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8445> 2026-01-22T15:33:58.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:28.355819+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7093 sent 7092 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:58.121504+0000 osd.2 (osd.2) 7093 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7092) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:57.101864+0000 osd.2 (osd.2) 7092 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8434> 2026-01-22T15:33:59.083+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:29.356023+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7094 sent 7093 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:33:59.084220+0000 osd.2 (osd.2) 7094 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7093) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:58.121504+0000 osd.2 (osd.2) 7093 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7094) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:33:59.084220+0000 osd.2 (osd.2) 7094 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8421> 2026-01-22T15:34:00.047+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:30.356203+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7095 sent 7094 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:00.047386+0000 osd.2 (osd.2) 7095 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8412> 2026-01-22T15:34:01.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,10,8,33,40,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:31.356443+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7096 sent 7095 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:01.091181+0000 osd.2 (osd.2) 7096 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7095) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:00.047386+0000 osd.2 (osd.2) 7095 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8400> 2026-01-22T15:34:02.060+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:32.356677+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7097 sent 7096 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:02.061282+0000 osd.2 (osd.2) 7097 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7096) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:01.091181+0000 osd.2 (osd.2) 7096 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7097) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:02.061282+0000 osd.2 (osd.2) 7097 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8384> 2026-01-22T15:34:03.093+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:33.357030+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7098 sent 7097 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:03.093394+0000 osd.2 (osd.2) 7098 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8375> 2026-01-22T15:34:04.057+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7098) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:03.093394+0000 osd.2 (osd.2) 7098 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:34.357381+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7099 sent 7098 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:04.057940+0000 osd.2 (osd.2) 7099 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8363> 2026-01-22T15:34:05.084+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:35.357615+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7100 sent 7099 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:05.084945+0000 osd.2 (osd.2) 7100 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7099) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:04.057940+0000 osd.2 (osd.2) 7099 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8352> 2026-01-22T15:34:06.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:36.357874+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7101 sent 7100 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:06.037222+0000 osd.2 (osd.2) 7101 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8343> 2026-01-22T15:34:07.027+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:37.358148+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7102 sent 7101 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:07.028981+0000 osd.2 (osd.2) 7102 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8331> 2026-01-22T15:34:08.067+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7100) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:05.084945+0000 osd.2 (osd.2) 7100 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7101) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:06.037222+0000 osd.2 (osd.2) 7101 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:38.358377+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7103 sent 7102 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:08.068937+0000 osd.2 (osd.2) 7103 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7102) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:07.028981+0000 osd.2 (osd.2) 7102 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7103) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:08.068937+0000 osd.2 (osd.2) 7103 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8313> 2026-01-22T15:34:09.050+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:39.358667+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7104 sent 7103 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:09.051540+0000 osd.2 (osd.2) 7104 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8304> 2026-01-22T15:34:10.008+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:40.358948+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7105 sent 7104 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:10.010204+0000 osd.2 (osd.2) 7105 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7104) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:09.051540+0000 osd.2 (osd.2) 7104 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8293> 2026-01-22T15:34:11.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:41.359171+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7106 sent 7105 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:11.002727+0000 osd.2 (osd.2) 7106 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8285> 2026-01-22T15:34:11.958+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7105) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:10.010204+0000 osd.2 (osd.2) 7105 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:42.359382+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7107 sent 7106 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:11.960143+0000 osd.2 (osd.2) 7107 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8273> 2026-01-22T15:34:12.913+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7106) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:11.002727+0000 osd.2 (osd.2) 7106 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7107) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:11.960143+0000 osd.2 (osd.2) 7107 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:43.359558+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7108 sent 7107 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:12.914962+0000 osd.2 (osd.2) 7108 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735dbcc400
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 432.608551025s of 433.505828857s, submitted: 246
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8255> 2026-01-22T15:34:13.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735dbcc400 session 0x55735b58e960
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c7ce800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:44.359759+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7109 sent 7108 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:13.880959+0000 osd.2 (osd.2) 7109 : cluster [WRN] 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7108) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:12.914962+0000 osd.2 (osd.2) 7108 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735c7ce800 session 0x55735d390960
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8240> 2026-01-22T15:34:14.863+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206757888 unmapped: 2547712 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:45.360018+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7110 sent 7109 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:14.864668+0000 osd.2 (osd.2) 7110 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8230> 2026-01-22T15:34:15.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7109) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:13.880959+0000 osd.2 (osd.2) 7109 : cluster [WRN] 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7110) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:14.864668+0000 osd.2 (osd.2) 7110 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206790656 unmapped: 2514944 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735bf03800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735dbcd400 session 0x55735d0385a0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x557359ea8800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735bf03800 session 0x55735ceab0e0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735c639400 session 0x55735cbea000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735a80a000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:46.360219+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7111 sent 7110 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:15.896439+0000 osd.2 (osd.2) 7111 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8210> 2026-01-22T15:34:16.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206790656 unmapped: 2514944 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:47.360452+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7112 sent 7111 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:16.957538+0000 osd.2 (osd.2) 7112 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7111) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:15.896439+0000 osd.2 (osd.2) 7111 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8199> 2026-01-22T15:34:17.969+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206790656 unmapped: 2514944 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7112) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:16.957538+0000 osd.2 (osd.2) 7112 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2712927 data_alloc: 218103808 data_used: 13565952
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:48.360705+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7113 sent 7112 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:17.971136+0000 osd.2 (osd.2) 7113 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c639400
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c639400 session 0x55735b035860
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c7ce800
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8181> 2026-01-22T15:34:18.986+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206798848 unmapped: 2506752 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7113) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:17.971136+0000 osd.2 (osd.2) 7113 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c7ce800 session 0x55735ceaa3c0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:49.360934+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7114 sent 7113 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:18.987740+0000 osd.2 (osd.2) 7114 : cluster [WRN] 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735dbcc400
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 9428992 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8167> 2026-01-22T15:34:19.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735dbcc400 session 0x55735c6234a0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735cee7800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:50.361097+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7115 sent 7114 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:20.001033+0000 osd.2 (osd.2) 7115 : cluster [WRN] 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7114) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:18.987740+0000 osd.2 (osd.2) 7114 : cluster [WRN] 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735cee7800 session 0x55735a739860
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec6000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8153> 2026-01-22T15:34:20.958+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7115) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:20.001033+0000 osd.2 (osd.2) 7115 : cluster [WRN] 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:51.361270+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7116 sent 7115 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:20.959468+0000 osd.2 (osd.2) 7116 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec7000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8141> 2026-01-22T15:34:21.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:52.361468+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7117 sent 7116 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:21.992578+0000 osd.2 (osd.2) 7117 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7116) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:20.959468+0000 osd.2 (osd.2) 7116 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8129> 2026-01-22T15:34:23.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c353800 session 0x55735a22e000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735cee7800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759862 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:53.361658+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7118 sent 7117 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:23.028533+0000 osd.2 (osd.2) 7118 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7117) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:21.992578+0000 osd.2 (osd.2) 7117 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7118) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:23.028533+0000 osd.2 (osd.2) 7118 : cluster [WRN] 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8111> 2026-01-22T15:34:24.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec7000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:54.361876+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7119 sent 7118 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:24.048690+0000 osd.2 (osd.2) 7119 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7119) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:24.048690+0000 osd.2 (osd.2) 7119 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8099> 2026-01-22T15:34:25.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:55.362108+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7120 sent 7119 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:25.098498+0000 osd.2 (osd.2) 7120 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7120) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:25.098498+0000 osd.2 (osd.2) 7120 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8088> 2026-01-22T15:34:26.050+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec7000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:56.362366+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7121 sent 7120 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:26.051965+0000 osd.2 (osd.2) 7121 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7121) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:26.051965+0000 osd.2 (osd.2) 7121 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8077> 2026-01-22T15:34:27.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:57.362644+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7122 sent 7121 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:27.017516+0000 osd.2 (osd.2) 7122 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7122) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:27.017516+0000 osd.2 (osd.2) 7122 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8066> 2026-01-22T15:34:28.008+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759862 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:58.362978+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7123 sent 7122 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:28.009893+0000 osd.2 (osd.2) 7123 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735f233c00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.176485062s of 14.780130386s, submitted: 54
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735f233c00 session 0x55735ce2f4a0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735bf08800
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8049> 2026-01-22T15:34:29.056+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:33:59.363449+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7124 sent 7123 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:29.057454+0000 osd.2 (osd.2) 7124 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7123) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:28.009893+0000 osd.2 (osd.2) 7123 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7124) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:29.057454+0000 osd.2 (osd.2) 7124 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8037> 2026-01-22T15:34:30.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:00.363987+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7125 sent 7124 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:30.018216+0000 osd.2 (osd.2) 7125 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8028> 2026-01-22T15:34:31.012+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:01.364235+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7126 sent 7125 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:31.013968+0000 osd.2 (osd.2) 7126 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8018> 2026-01-22T15:34:32.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec8000/0x0/0x1bfc00000, data 0xbcb6ab3/0xab96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,1,11,8,33,40,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:02.364490+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7127 sent 7126 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:32.043058+0000 osd.2 (osd.2) 7127 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -8008> 2026-01-22T15:34:33.046+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759365 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7125) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:30.018216+0000 osd.2 (osd.2) 7125 : cluster [WRN] 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:03.364709+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7128 sent 7127 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:33.047001+0000 osd.2 (osd.2) 7128 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7126) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:31.013968+0000 osd.2 (osd.2) 7126 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735bf08800 session 0x55735cea81e0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c352000 session 0x55735cea9680
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735ca61800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7989> 2026-01-22T15:34:34.081+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:04.364922+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7129 sent 7128 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:34.081520+0000 osd.2 (osd.2) 7129 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7127) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:32.043058+0000 osd.2 (osd.2) 7127 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7128) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:33.047001+0000 osd.2 (osd.2) 7128 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7976> 2026-01-22T15:34:35.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:05.365130+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7130 sent 7129 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:35.091831+0000 osd.2 (osd.2) 7130 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7129) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:34.081520+0000 osd.2 (osd.2) 7129 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7130) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:35.091831+0000 osd.2 (osd.2) 7130 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7963> 2026-01-22T15:34:36.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1,10,9,30,43,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:06.365394+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7131 sent 7130 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:36.136184+0000 osd.2 (osd.2) 7131 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7953> 2026-01-22T15:34:37.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:07.365723+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7132 sent 7131 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:37.110835+0000 osd.2 (osd.2) 7132 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7131) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:36.136184+0000 osd.2 (osd.2) 7131 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7942> 2026-01-22T15:34:38.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:08.366014+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7133 sent 7132 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:38.137162+0000 osd.2 (osd.2) 7133 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7132) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:37.110835+0000 osd.2 (osd.2) 7132 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7133) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:38.137162+0000 osd.2 (osd.2) 7133 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7926> 2026-01-22T15:34:39.127+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:09.366231+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7134 sent 7133 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:39.127926+0000 osd.2 (osd.2) 7134 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7917> 2026-01-22T15:34:40.152+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7134) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:39.127926+0000 osd.2 (osd.2) 7134 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,10,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:10.366459+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7135 sent 7134 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:40.152645+0000 osd.2 (osd.2) 7135 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7905> 2026-01-22T15:34:41.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7135) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:40.152645+0000 osd.2 (osd.2) 7135 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:11.366711+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7136 sent 7135 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:41.120910+0000 osd.2 (osd.2) 7136 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7894> 2026-01-22T15:34:42.112+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:12.366919+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7137 sent 7136 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:42.112747+0000 osd.2 (osd.2) 7137 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7136) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:41.120910+0000 osd.2 (osd.2) 7136 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7883> 2026-01-22T15:34:43.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,10,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:13.367098+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7138 sent 7137 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:43.136275+0000 osd.2 (osd.2) 7138 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7137) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:42.112747+0000 osd.2 (osd.2) 7137 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7868> 2026-01-22T15:34:44.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:14.367344+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7139 sent 7138 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:44.169801+0000 osd.2 (osd.2) 7139 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7138) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:43.136275+0000 osd.2 (osd.2) 7138 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7139) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:44.169801+0000 osd.2 (osd.2) 7139 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7855> 2026-01-22T15:34:45.138+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:15.367534+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7140 sent 7139 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:45.140085+0000 osd.2 (osd.2) 7140 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7140) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:45.140085+0000 osd.2 (osd.2) 7140 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,10,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7843> 2026-01-22T15:34:46.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:16.367753+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7141 sent 7140 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:46.097854+0000 osd.2 (osd.2) 7141 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7141) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:46.097854+0000 osd.2 (osd.2) 7141 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7832> 2026-01-22T15:34:47.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:17.367949+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7142 sent 7141 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:47.137686+0000 osd.2 (osd.2) 7142 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7142) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:47.137686+0000 osd.2 (osd.2) 7142 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7821> 2026-01-22T15:34:48.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:18.368153+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7143 sent 7142 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:48.158825+0000 osd.2 (osd.2) 7143 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7143) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:48.158825+0000 osd.2 (osd.2) 7143 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7807> 2026-01-22T15:34:49.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:19.368371+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7144 sent 7143 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:49.187428+0000 osd.2 (osd.2) 7144 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7144) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:49.187428+0000 osd.2 (osd.2) 7144 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7796> 2026-01-22T15:34:50.146+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:20.368881+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7145 sent 7144 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:50.147901+0000 osd.2 (osd.2) 7145 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,1,10,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7145) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:50.147901+0000 osd.2 (osd.2) 7145 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7784> 2026-01-22T15:34:51.123+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:21.369105+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7146 sent 7145 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:51.124576+0000 osd.2 (osd.2) 7146 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7146) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:51.124576+0000 osd.2 (osd.2) 7146 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7774> 2026-01-22T15:34:52.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:22.369346+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7147 sent 7146 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:52.092746+0000 osd.2 (osd.2) 7147 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7147) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:52.092746+0000 osd.2 (osd.2) 7147 : cluster [WRN] 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7763> 2026-01-22T15:34:53.078+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:23.369565+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7148 sent 7147 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:53.079505+0000 osd.2 (osd.2) 7148 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7751> 2026-01-22T15:34:54.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7148) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:53.079505+0000 osd.2 (osd.2) 7148 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:24.369754+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7149 sent 7148 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:54.068164+0000 osd.2 (osd.2) 7149 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,1,10,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7739> 2026-01-22T15:34:55.054+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:25.369913+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7150 sent 7149 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:55.055121+0000 osd.2 (osd.2) 7150 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7149) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:54.068164+0000 osd.2 (osd.2) 7149 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7150) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:55.055121+0000 osd.2 (osd.2) 7150 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7726> 2026-01-22T15:34:56.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c353400 session 0x55735c28ed20
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735f235400
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:26.370088+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7151 sent 7150 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:56.034166+0000 osd.2 (osd.2) 7151 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7151) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:56.034166+0000 osd.2 (osd.2) 7151 : cluster [WRN] 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7713> 2026-01-22T15:34:57.052+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:27.370301+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7152 sent 7151 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:57.053041+0000 osd.2 (osd.2) 7152 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7152) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:57.053041+0000 osd.2 (osd.2) 7152 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7702> 2026-01-22T15:34:58.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:28.370521+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7153 sent 7152 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:58.006991+0000 osd.2 (osd.2) 7153 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7153) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:58.006991+0000 osd.2 (osd.2) 7153 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7688> 2026-01-22T15:34:59.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:29.370726+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7154 sent 7153 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:59.021030+0000 osd.2 (osd.2) 7154 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,1,10,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7154) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:59.021030+0000 osd.2 (osd.2) 7154 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7676> 2026-01-22T15:34:59.979+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:30.371078+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7155 sent 7154 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:34:59.980794+0000 osd.2 (osd.2) 7155 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7155) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:34:59.980794+0000 osd.2 (osd.2) 7155 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7665> 2026-01-22T15:35:00.974+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:31.371414+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7156 sent 7155 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:00.976288+0000 osd.2 (osd.2) 7156 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7156) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:00.976288+0000 osd.2 (osd.2) 7156 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7654> 2026-01-22T15:35:01.987+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:32.371639+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7157 sent 7156 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:01.989000+0000 osd.2 (osd.2) 7157 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7157) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:01.989000+0000 osd.2 (osd.2) 7157 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7643> 2026-01-22T15:35:03.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,11,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:33.371833+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7158 sent 7157 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:03.023405+0000 osd.2 (osd.2) 7158 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7630> 2026-01-22T15:35:04.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7158) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:03.023405+0000 osd.2 (osd.2) 7158 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:34.372074+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7159 sent 7158 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:04.073686+0000 osd.2 (osd.2) 7159 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7619> 2026-01-22T15:35:05.057+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7159) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:04.073686+0000 osd.2 (osd.2) 7159 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:35.372468+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7160 sent 7159 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:05.059127+0000 osd.2 (osd.2) 7160 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7608> 2026-01-22T15:35:06.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7160) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:05.059127+0000 osd.2 (osd.2) 7160 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:36.372759+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7161 sent 7160 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:06.099058+0000 osd.2 (osd.2) 7161 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7597> 2026-01-22T15:35:07.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,0,11,9,27,46,61,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:37.372940+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7162 sent 7161 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:07.110572+0000 osd.2 (osd.2) 7162 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7161) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:06.099058+0000 osd.2 (osd.2) 7161 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7585> 2026-01-22T15:35:08.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:38.373094+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7163 sent 7162 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:08.081207+0000 osd.2 (osd.2) 7163 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7162) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:07.110572+0000 osd.2 (osd.2) 7162 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7163) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:08.081207+0000 osd.2 (osd.2) 7163 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7569> 2026-01-22T15:35:09.084+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:39.373257+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7164 sent 7163 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:09.085700+0000 osd.2 (osd.2) 7164 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7164) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:09.085700+0000 osd.2 (osd.2) 7164 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7558> 2026-01-22T15:35:10.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:40.373625+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7165 sent 7164 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:10.110905+0000 osd.2 (osd.2) 7165 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7165) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:10.110905+0000 osd.2 (osd.2) 7165 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7547> 2026-01-22T15:35:11.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:41.373797+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7166 sent 7165 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:11.094501+0000 osd.2 (osd.2) 7166 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7166) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:11.094501+0000 osd.2 (osd.2) 7166 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7536> 2026-01-22T15:35:12.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:42.374020+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7167 sent 7166 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:12.120201+0000 osd.2 (osd.2) 7167 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7526> 2026-01-22T15:35:13.095+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7167) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:12.120201+0000 osd.2 (osd.2) 7167 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:43.374194+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7168 sent 7167 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:13.095644+0000 osd.2 (osd.2) 7168 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7512> 2026-01-22T15:35:14.107+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7168) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:13.095644+0000 osd.2 (osd.2) 7168 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:44.374464+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7169 sent 7168 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:14.107884+0000 osd.2 (osd.2) 7169 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7501> 2026-01-22T15:35:15.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:45.374710+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7170 sent 7169 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:15.131369+0000 osd.2 (osd.2) 7170 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7169) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:14.107884+0000 osd.2 (osd.2) 7169 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7489> 2026-01-22T15:35:16.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:46.374939+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7171 sent 7170 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:16.088646+0000 osd.2 (osd.2) 7171 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7480> 2026-01-22T15:35:17.133+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7170) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:15.131369+0000 osd.2 (osd.2) 7170 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7171) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:16.088646+0000 osd.2 (osd.2) 7171 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:47.375182+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7172 sent 7171 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:17.133599+0000 osd.2 (osd.2) 7172 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7467> 2026-01-22T15:35:18.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7172) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:17.133599+0000 osd.2 (osd.2) 7172 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:48.375388+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7173 sent 7172 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:18.115384+0000 osd.2 (osd.2) 7173 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7453> 2026-01-22T15:35:19.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7173) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:18.115384+0000 osd.2 (osd.2) 7173 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:49.375650+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7174 sent 7173 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:19.156240+0000 osd.2 (osd.2) 7174 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7442> 2026-01-22T15:35:20.160+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:50.375919+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7175 sent 7174 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:20.160669+0000 osd.2 (osd.2) 7175 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7432> 2026-01-22T15:35:21.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:51.376119+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7176 sent 7175 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:21.119768+0000 osd.2 (osd.2) 7176 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,10,10,27,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7174) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:19.156240+0000 osd.2 (osd.2) 7174 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7419> 2026-01-22T15:35:22.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7175) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:20.160669+0000 osd.2 (osd.2) 7175 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7176) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:21.119768+0000 osd.2 (osd.2) 7176 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:52.376325+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7177 sent 7176 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:22.123485+0000 osd.2 (osd.2) 7177 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7406> 2026-01-22T15:35:23.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7177) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:22.123485+0000 osd.2 (osd.2) 7177 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:53.376506+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7178 sent 7177 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:23.137737+0000 osd.2 (osd.2) 7178 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,10,5,32,45,62,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7391> 2026-01-22T15:35:24.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:54.376731+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7179 sent 7178 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:24.156267+0000 osd.2 (osd.2) 7179 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7178) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:23.137737+0000 osd.2 (osd.2) 7178 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7380> 2026-01-22T15:35:25.201+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:55.376938+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7180 sent 7179 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:25.202293+0000 osd.2 (osd.2) 7180 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7179) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:24.156267+0000 osd.2 (osd.2) 7179 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7180) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:25.202293+0000 osd.2 (osd.2) 7180 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7366> 2026-01-22T15:35:26.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:56.377116+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7181 sent 7180 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:26.226697+0000 osd.2 (osd.2) 7181 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7181) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:26.226697+0000 osd.2 (osd.2) 7181 : cluster [WRN] 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7355> 2026-01-22T15:35:27.259+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:57.377373+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7182 sent 7181 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:27.260921+0000 osd.2 (osd.2) 7182 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,7,8,32,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7182) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:27.260921+0000 osd.2 (osd.2) 7182 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7343> 2026-01-22T15:35:28.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:58.377579+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7183 sent 7182 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:28.293558+0000 osd.2 (osd.2) 7183 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7183) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:28.293558+0000 osd.2 (osd.2) 7183 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,8,32,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7328> 2026-01-22T15:35:29.317+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:34:59.377807+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7184 sent 7183 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:29.318587+0000 osd.2 (osd.2) 7184 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7184) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:29.318587+0000 osd.2 (osd.2) 7184 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7317> 2026-01-22T15:35:30.322+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:00.378028+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7185 sent 7184 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:30.324091+0000 osd.2 (osd.2) 7185 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7185) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:30.324091+0000 osd.2 (osd.2) 7185 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7306> 2026-01-22T15:35:31.281+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:01.378274+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7186 sent 7185 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:31.282914+0000 osd.2 (osd.2) 7186 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,8,32,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7186) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:31.282914+0000 osd.2 (osd.2) 7186 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7294> 2026-01-22T15:35:32.246+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:02.378599+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7187 sent 7186 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:32.247557+0000 osd.2 (osd.2) 7187 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,8,32,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7284> 2026-01-22T15:35:33.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:03.378829+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7188 sent 7187 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:33.250285+0000 osd.2 (osd.2) 7188 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7272> 2026-01-22T15:35:34.255+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,7,33,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:04.379105+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7189 sent 7188 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:34.257093+0000 osd.2 (osd.2) 7189 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7187) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:32.247557+0000 osd.2 (osd.2) 7187 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7260> 2026-01-22T15:35:35.283+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:05.379402+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7190 sent 7189 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:35.284421+0000 osd.2 (osd.2) 7190 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7188) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:33.250285+0000 osd.2 (osd.2) 7188 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7189) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:34.257093+0000 osd.2 (osd.2) 7189 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7190) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:35.284421+0000 osd.2 (osd.2) 7190 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7245> 2026-01-22T15:35:36.278+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:06.379603+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7191 sent 7190 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:36.280205+0000 osd.2 (osd.2) 7191 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7191) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:36.280205+0000 osd.2 (osd.2) 7191 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,7,33,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7233> 2026-01-22T15:35:37.298+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:07.379828+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7192 sent 7191 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:37.300017+0000 osd.2 (osd.2) 7192 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7192) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:37.300017+0000 osd.2 (osd.2) 7192 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7222> 2026-01-22T15:35:38.265+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:08.380015+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7193 sent 7192 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:38.267302+0000 osd.2 (osd.2) 7193 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7193) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:38.267302+0000 osd.2 (osd.2) 7193 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,7,33,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7207> 2026-01-22T15:35:39.273+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:09.380386+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7194 sent 7193 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:39.274882+0000 osd.2 (osd.2) 7194 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7194) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:39.274882+0000 osd.2 (osd.2) 7194 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,7,33,45,62,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7196> 2026-01-22T15:35:40.231+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:10.380836+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7195 sent 7194 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:40.233026+0000 osd.2 (osd.2) 7195 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7195) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:40.233026+0000 osd.2 (osd.2) 7195 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7185> 2026-01-22T15:35:41.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:11.381077+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7196 sent 7195 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:41.249567+0000 osd.2 (osd.2) 7196 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,7,33,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7174> 2026-01-22T15:35:42.280+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:12.381380+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7197 sent 7196 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:42.282357+0000 osd.2 (osd.2) 7197 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7196) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:41.249567+0000 osd.2 (osd.2) 7196 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7163> 2026-01-22T15:35:43.289+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:13.381588+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7198 sent 7197 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:43.290616+0000 osd.2 (osd.2) 7198 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7197) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:42.282357+0000 osd.2 (osd.2) 7197 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7198) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:43.290616+0000 osd.2 (osd.2) 7198 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7147> 2026-01-22T15:35:44.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:14.381881+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7199 sent 7198 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:44.294178+0000 osd.2 (osd.2) 7199 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,7,33,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7199) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:44.294178+0000 osd.2 (osd.2) 7199 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7136> 2026-01-22T15:35:45.256+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:15.382160+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7200 sent 7199 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:45.257712+0000 osd.2 (osd.2) 7200 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7200) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:45.257712+0000 osd.2 (osd.2) 7200 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7125> 2026-01-22T15:35:46.268+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,6,34,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:16.382388+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7201 sent 7200 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:46.269622+0000 osd.2 (osd.2) 7201 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7201) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:46.269622+0000 osd.2 (osd.2) 7201 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7113> 2026-01-22T15:35:47.247+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:17.382670+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7202 sent 7201 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:47.248993+0000 osd.2 (osd.2) 7202 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7202) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:47.248993+0000 osd.2 (osd.2) 7202 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7102> 2026-01-22T15:35:48.239+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:18.382885+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7203 sent 7202 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:48.240274+0000 osd.2 (osd.2) 7203 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,6,7,34,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7203) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:48.240274+0000 osd.2 (osd.2) 7203 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7087> 2026-01-22T15:35:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:19.383102+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7204 sent 7203 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:49.222716+0000 osd.2 (osd.2) 7204 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,6,7,34,45,62,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7204) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:49.222716+0000 osd.2 (osd.2) 7204 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7075> 2026-01-22T15:35:50.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:20.383434+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7205 sent 7204 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:50.186417+0000 osd.2 (osd.2) 7205 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7066> 2026-01-22T15:35:51.179+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7205) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:50.186417+0000 osd.2 (osd.2) 7205 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:21.383673+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7206 sent 7205 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:51.180288+0000 osd.2 (osd.2) 7206 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7055> 2026-01-22T15:35:52.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7206) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:51.180288+0000 osd.2 (osd.2) 7206 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:22.383898+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7207 sent 7206 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:52.186770+0000 osd.2 (osd.2) 7207 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7207) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:52.186770+0000 osd.2 (osd.2) 7207 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7042> 2026-01-22T15:35:53.235+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:23.384109+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7208 sent 7207 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:53.236209+0000 osd.2 (osd.2) 7208 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7030> 2026-01-22T15:35:54.240+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7208) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:53.236209+0000 osd.2 (osd.2) 7208 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:24.384339+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7209 sent 7208 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:54.240885+0000 osd.2 (osd.2) 7209 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7019> 2026-01-22T15:35:55.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7209) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:54.240885+0000 osd.2 (osd.2) 7209 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:25.384634+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7210 sent 7209 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:55.228932+0000 osd.2 (osd.2) 7210 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,11,34,45,62,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -7007> 2026-01-22T15:35:56.253+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:26.384934+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7211 sent 7210 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:56.254556+0000 osd.2 (osd.2) 7211 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735dbcf000 session 0x55735d045680
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c7cbc00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7210) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:55.228932+0000 osd.2 (osd.2) 7210 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6994> 2026-01-22T15:35:57.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:27.385122+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7212 sent 7211 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:57.229655+0000 osd.2 (osd.2) 7212 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7211) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:56.254556+0000 osd.2 (osd.2) 7211 : cluster [WRN] 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7212) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:57.229655+0000 osd.2 (osd.2) 7212 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6981> 2026-01-22T15:35:58.197+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:28.385400+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7213 sent 7212 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:58.198281+0000 osd.2 (osd.2) 7213 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7213) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:58.198281+0000 osd.2 (osd.2) 7213 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6967> 2026-01-22T15:35:59.219+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:29.385631+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7214 sent 7213 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:35:59.219651+0000 osd.2 (osd.2) 7214 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6958> 2026-01-22T15:36:00.177+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7214) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:35:59.219651+0000 osd.2 (osd.2) 7214 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,2,2,11,34,41,66,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:30.385883+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7215 sent 7214 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:00.178153+0000 osd.2 (osd.2) 7215 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6946> 2026-01-22T15:36:01.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:31.386062+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7216 sent 7215 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:01.187808+0000 osd.2 (osd.2) 7216 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7215) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:00.178153+0000 osd.2 (osd.2) 7215 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6935> 2026-01-22T15:36:02.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:32.386366+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7217 sent 7216 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:02.150541+0000 osd.2 (osd.2) 7217 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7216) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:01.187808+0000 osd.2 (osd.2) 7216 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7217) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:02.150541+0000 osd.2 (osd.2) 7217 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6922> 2026-01-22T15:36:03.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:33.386601+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7218 sent 7217 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:03.121518+0000 osd.2 (osd.2) 7218 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7218) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:03.121518+0000 osd.2 (osd.2) 7218 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6908> 2026-01-22T15:36:04.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:34.386913+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7219 sent 7218 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:04.103859+0000 osd.2 (osd.2) 7219 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,3,2,11,34,41,66,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7219) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:04.103859+0000 osd.2 (osd.2) 7219 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6896> 2026-01-22T15:36:05.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:35.387206+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7220 sent 7219 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:05.095992+0000 osd.2 (osd.2) 7220 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6887> 2026-01-22T15:36:06.058+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:36.387476+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7221 sent 7220 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:06.060033+0000 osd.2 (osd.2) 7221 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6878> 2026-01-22T15:36:07.105+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7220) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:05.095992+0000 osd.2 (osd.2) 7220 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7221) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:06.060033+0000 osd.2 (osd.2) 7221 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:37.387709+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7222 sent 7221 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:07.106369+0000 osd.2 (osd.2) 7222 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6865> 2026-01-22T15:36:08.065+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7222) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:07.106369+0000 osd.2 (osd.2) 7222 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:38.387873+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7223 sent 7222 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:08.066825+0000 osd.2 (osd.2) 7223 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,2,11,34,41,66,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6850> 2026-01-22T15:36:09.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7223) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:08.066825+0000 osd.2 (osd.2) 7223 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:39.388123+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7224 sent 7223 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:09.021131+0000 osd.2 (osd.2) 7224 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,2,11,34,41,66,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6838> 2026-01-22T15:36:10.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7224) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:09.021131+0000 osd.2 (osd.2) 7224 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:40.388452+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7225 sent 7224 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:10.049182+0000 osd.2 (osd.2) 7225 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6827> 2026-01-22T15:36:11.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,2,11,34,41,66,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:41.388708+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7226 sent 7225 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:11.010594+0000 osd.2 (osd.2) 7226 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7225) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:10.049182+0000 osd.2 (osd.2) 7225 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6815> 2026-01-22T15:36:12.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:42.388910+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7227 sent 7226 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:12.023686+0000 osd.2 (osd.2) 7227 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7226) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:11.010594+0000 osd.2 (osd.2) 7226 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7227) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:12.023686+0000 osd.2 (osd.2) 7227 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6802> 2026-01-22T15:36:13.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:43.389099+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7228 sent 7227 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:13.034774+0000 osd.2 (osd.2) 7228 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7228) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:13.034774+0000 osd.2 (osd.2) 7228 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6788> 2026-01-22T15:36:14.064+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:44.389453+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7229 sent 7228 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:14.065514+0000 osd.2 (osd.2) 7229 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7229) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:14.065514+0000 osd.2 (osd.2) 7229 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6777> 2026-01-22T15:36:15.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,41,66,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:45.389662+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7230 sent 7229 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:15.068914+0000 osd.2 (osd.2) 7230 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7230) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:15.068914+0000 osd.2 (osd.2) 7230 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6765> 2026-01-22T15:36:16.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:46.389850+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7231 sent 7230 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:16.074283+0000 osd.2 (osd.2) 7231 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7231) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:16.074283+0000 osd.2 (osd.2) 7231 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,41,66,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6753> 2026-01-22T15:36:17.092+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:47.390065+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7232 sent 7231 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:17.094079+0000 osd.2 (osd.2) 7232 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7232) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:17.094079+0000 osd.2 (osd.2) 7232 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6742> 2026-01-22T15:36:18.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:48.390305+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7233 sent 7232 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:18.110274+0000 osd.2 (osd.2) 7233 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7233) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:18.110274+0000 osd.2 (osd.2) 7233 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6728> 2026-01-22T15:36:19.153+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:49.390614+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7234 sent 7233 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:19.155135+0000 osd.2 (osd.2) 7234 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7234) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:19.155135+0000 osd.2 (osd.2) 7234 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6717> 2026-01-22T15:36:20.111+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:50.391047+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7235 sent 7234 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:20.113075+0000 osd.2 (osd.2) 7235 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7235) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:20.113075+0000 osd.2 (osd.2) 7235 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6706> 2026-01-22T15:36:21.117+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:51.391223+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7236 sent 7235 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:21.118626+0000 osd.2 (osd.2) 7236 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6697> 2026-01-22T15:36:22.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,41,66,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:52.391584+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7237 sent 7236 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:22.114411+0000 osd.2 (osd.2) 7237 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7236) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:21.118626+0000 osd.2 (osd.2) 7236 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6685> 2026-01-22T15:36:23.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:53.391796+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7238 sent 7237 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:23.133013+0000 osd.2 (osd.2) 7238 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7237) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:22.114411+0000 osd.2 (osd.2) 7237 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6671> 2026-01-22T15:36:24.161+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:54.391995+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7239 sent 7238 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:24.162997+0000 osd.2 (osd.2) 7239 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6662> 2026-01-22T15:36:25.121+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7238) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:23.133013+0000 osd.2 (osd.2) 7238 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7239) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:24.162997+0000 osd.2 (osd.2) 7239 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:55.392255+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7240 sent 7239 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:25.121671+0000 osd.2 (osd.2) 7240 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,40,67,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6648> 2026-01-22T15:36:26.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:56.392552+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7241 sent 7240 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:26.080120+0000 osd.2 (osd.2) 7241 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6639> 2026-01-22T15:36:27.116+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:57.886466+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7242 sent 7241 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:27.116980+0000 osd.2 (osd.2) 7242 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7240) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:25.121671+0000 osd.2 (osd.2) 7240 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6628> 2026-01-22T15:36:28.150+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,40,67,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:58.887299+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7243 sent 7242 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:28.150380+0000 osd.2 (osd.2) 7243 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6615> 2026-01-22T15:36:29.200+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7241) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:26.080120+0000 osd.2 (osd.2) 7241 : cluster [WRN] 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7242) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:27.116980+0000 osd.2 (osd.2) 7242 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7243) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:28.150380+0000 osd.2 (osd.2) 7243 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:35:59.887528+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7244 sent 7243 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:29.201205+0000 osd.2 (osd.2) 7244 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,11,35,40,67,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6599> 2026-01-22T15:36:30.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7244) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:29.201205+0000 osd.2 (osd.2) 7244 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:00.887719+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7245 sent 7244 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:30.159537+0000 osd.2 (osd.2) 7245 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6588> 2026-01-22T15:36:31.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.5 total, 600.0 interval
                                           Cumulative writes: 14K writes, 44K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                           Cumulative WAL: 14K writes, 4846 syncs, 2.94 writes per sync, written: 0.03 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 636 writes, 1117 keys, 636 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 636 writes, 315 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7245) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:30.159537+0000 osd.2 (osd.2) 7245 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:01.888061+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7246 sent 7245 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:31.128937+0000 osd.2 (osd.2) 7246 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6575> 2026-01-22T15:36:32.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:02.888349+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7247 sent 7246 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:32.157506+0000 osd.2 (osd.2) 7247 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6566> 2026-01-22T15:36:33.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:03.888639+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7248 sent 7247 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:33.108765+0000 osd.2 (osd.2) 7248 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7246) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:31.128937+0000 osd.2 (osd.2) 7246 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6552> 2026-01-22T15:36:34.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7247) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:32.157506+0000 osd.2 (osd.2) 7247 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7248) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:33.108765+0000 osd.2 (osd.2) 7248 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:04.888854+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7249 sent 7248 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:34.072790+0000 osd.2 (osd.2) 7249 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6539> 2026-01-22T15:36:35.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,11,35,40,67,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,11,35,40,67,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:05.889080+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7250 sent 7249 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:35.113758+0000 osd.2 (osd.2) 7250 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6528> 2026-01-22T15:36:36.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7249) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:34.072790+0000 osd.2 (osd.2) 7249 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:06.889281+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7251 sent 7250 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:36.121018+0000 osd.2 (osd.2) 7251 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6517> 2026-01-22T15:36:37.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7250) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:35.113758+0000 osd.2 (osd.2) 7250 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7251) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:36.121018+0000 osd.2 (osd.2) 7251 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:07.889540+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7252 sent 7251 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:37.121277+0000 osd.2 (osd.2) 7252 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6504> 2026-01-22T15:36:38.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735dbd3c00 session 0x55735b5054a0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735ca5f800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:08.889750+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7253 sent 7252 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:38.092708+0000 osd.2 (osd.2) 7253 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7252) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:37.121277+0000 osd.2 (osd.2) 7252 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6488> 2026-01-22T15:36:39.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,2,2,11,35,40,67,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:09.889954+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7254 sent 7253 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:39.119559+0000 osd.2 (osd.2) 7254 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6478> 2026-01-22T15:36:40.167+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7253) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:38.092708+0000 osd.2 (osd.2) 7253 : cluster [WRN] 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7254) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:39.119559+0000 osd.2 (osd.2) 7254 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:10.890149+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7255 sent 7254 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:40.168601+0000 osd.2 (osd.2) 7255 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6465> 2026-01-22T15:36:41.181+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:11.890301+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7256 sent 7255 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:41.183199+0000 osd.2 (osd.2) 7256 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6456> 2026-01-22T15:36:42.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7255) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:40.168601+0000 osd.2 (osd.2) 7255 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7256) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:41.183199+0000 osd.2 (osd.2) 7256 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:12.890573+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7257 sent 7256 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:42.228122+0000 osd.2 (osd.2) 7257 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6443> 2026-01-22T15:36:43.199+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7257) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:42.228122+0000 osd.2 (osd.2) 7257 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:13.890734+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7258 sent 7257 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:43.200759+0000 osd.2 (osd.2) 7258 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6429> 2026-01-22T15:36:44.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:14.890927+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7259 sent 7258 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:44.228435+0000 osd.2 (osd.2) 7259 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7258) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:43.200759+0000 osd.2 (osd.2) 7258 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: mgrc ms_handle_reset ms_handle_reset con 0x55735a80bc00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1334415348
Jan 22 15:45:44 compute-2 ceph-osd[79779]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1334415348,v1:192.168.122.100:6801/1334415348]
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: get_auth_request con 0x55735dc39000 auth_method 0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: mgrc handle_mgr_configure stats_period=5
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6413> 2026-01-22T15:36:45.251+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,3,11,35,40,67,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:15.891124+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7260 sent 7259 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:45.252399+0000 osd.2 (osd.2) 7260 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7259) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:44.228435+0000 osd.2 (osd.2) 7259 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7260) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:45.252399+0000 osd.2 (osd.2) 7260 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6399> 2026-01-22T15:36:46.232+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c5cb800 session 0x55735a69a000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735bf02c00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:16.891408+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7261 sent 7260 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:46.233366+0000 osd.2 (osd.2) 7261 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7261) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:46.233366+0000 osd.2 (osd.2) 7261 : cluster [WRN] 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6386> 2026-01-22T15:36:47.213+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:17.891544+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7262 sent 7261 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:47.214965+0000 osd.2 (osd.2) 7262 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7262) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:47.214965+0000 osd.2 (osd.2) 7262 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6375> 2026-01-22T15:36:48.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:18.891741+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7263 sent 7262 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:48.227783+0000 osd.2 (osd.2) 7263 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7263) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:48.227783+0000 osd.2 (osd.2) 7263 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6361> 2026-01-22T15:36:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,2,0,1,0,1,1,3,11,35,39,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:19.891951+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7264 sent 7263 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:49.223471+0000 osd.2 (osd.2) 7264 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7264) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:49.223471+0000 osd.2 (osd.2) 7264 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6349> 2026-01-22T15:36:50.208+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:20.892236+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7265 sent 7264 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:50.209577+0000 osd.2 (osd.2) 7265 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,1,1,3,11,35,39,68,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6339> 2026-01-22T15:36:51.180+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7265) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:50.209577+0000 osd.2 (osd.2) 7265 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:21.892539+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7266 sent 7265 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:51.182260+0000 osd.2 (osd.2) 7266 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6328> 2026-01-22T15:36:52.193+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7266) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:51.182260+0000 osd.2 (osd.2) 7266 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:22.892789+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7267 sent 7266 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:52.195039+0000 osd.2 (osd.2) 7267 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6317> 2026-01-22T15:36:53.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7267) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:52.195039+0000 osd.2 (osd.2) 7267 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,1,1,3,11,35,39,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:23.893018+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7268 sent 7267 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:53.158605+0000 osd.2 (osd.2) 7268 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6302> 2026-01-22T15:36:54.142+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7268) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:53.158605+0000 osd.2 (osd.2) 7268 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:24.893238+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7269 sent 7268 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:54.143450+0000 osd.2 (osd.2) 7269 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,1,3,11,35,39,68,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6290> 2026-01-22T15:36:55.169+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7269) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:54.143450+0000 osd.2 (osd.2) 7269 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207331328 unmapped: 9322496 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:25.893424+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7270 sent 7269 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:55.170787+0000 osd.2 (osd.2) 7270 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6279> 2026-01-22T15:36:56.151+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7270) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:55.170787+0000 osd.2 (osd.2) 7270 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207331328 unmapped: 9322496 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 143.398162842s of 148.265136719s, submitted: 21
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:26.893617+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7271 sent 7270 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:56.152684+0000 osd.2 (osd.2) 7271 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6267> 2026-01-22T15:36:57.163+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7271) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:56.152684+0000 osd.2 (osd.2) 7271 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 9281536 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:27.893798+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7272 sent 7271 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:57.164655+0000 osd.2 (osd.2) 7272 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6256> 2026-01-22T15:36:58.147+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7272) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:57.164655+0000 osd.2 (osd.2) 7272 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 9281536 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:28.893985+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7273 sent 7272 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:58.148768+0000 osd.2 (osd.2) 7273 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6242> 2026-01-22T15:36:59.137+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7273) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:58.148768+0000 osd.2 (osd.2) 7273 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 9281536 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:29.894212+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7274 sent 7273 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:36:59.138978+0000 osd.2 (osd.2) 7274 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6231> 2026-01-22T15:37:00.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,1,1,3,11,35,39,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 9273344 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7274) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:36:59.138978+0000 osd.2 (osd.2) 7274 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:30.894720+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7275 sent 7274 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:00.110974+0000 osd.2 (osd.2) 7275 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6219> 2026-01-22T15:37:01.134+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 9273344 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7275) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:00.110974+0000 osd.2 (osd.2) 7275 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:31.894919+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7276 sent 7275 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:01.136041+0000 osd.2 (osd.2) 7276 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6208> 2026-01-22T15:37:02.178+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 9265152 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735cee5000 session 0x55735c6232c0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735f235c00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7276) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:01.136041+0000 osd.2 (osd.2) 7276 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:32.895137+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7277 sent 7276 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:02.180179+0000 osd.2 (osd.2) 7277 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6195> 2026-01-22T15:37:03.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 9166848 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:33.895336+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7278 sent 7277 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:03.168951+0000 osd.2 (osd.2) 7278 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7277) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:02.180179+0000 osd.2 (osd.2) 7277 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6181> 2026-01-22T15:37:04.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207503360 unmapped: 9150464 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:34.895492+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7279 sent 7278 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:04.164664+0000 osd.2 (osd.2) 7279 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7278) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:03.168951+0000 osd.2 (osd.2) 7278 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7279) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:04.164664+0000 osd.2 (osd.2) 7279 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6168> 2026-01-22T15:37:05.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:35.895663+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7280 sent 7279 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:05.122985+0000 osd.2 (osd.2) 7280 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7280) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:05.122985+0000 osd.2 (osd.2) 7280 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6157> 2026-01-22T15:37:06.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,0,2,3,11,34,40,68,30])
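
The heartbeat embeds store_statfs as slash-separated hex byte counts. If I read store_statfs_t's printer correctly, the order is available / internally reserved / total, then data stored / allocated, the three compression counters, omap allocated, and internal metadata; treat that mapping as an assumption. Decoded, this is a small, mostly empty OSD (stored can exceed allocated when clones share extents, plausible with snapshot-based mirroring in play):

    def gib(n):
        return n / 2**30

    # Field order assumed from store_statfs_t's operator<<:
    avail, reserved, total = 0x1b13dc000, 0x0, 0x1bfc00000
    stored, allocated = 0xb7a2ab3, 0xa682000
    omap, meta = 0x63a, 0x419f9c6

    print(f"device: {gib(avail):.2f} GiB free of {gib(total):.2f} GiB")
    print(f"data:   {stored/2**20:.1f} MiB stored, {allocated/2**20:.1f} MiB allocated")
    print(f"omap {omap} B, internal metadata {meta/2**20:.1f} MiB")
    # device: 6.77 GiB free of 7.00 GiB
    # data:   183.6 MiB stored, 166.5 MiB allocated
    # omap 1594 B, internal metadata 65.6 MiB
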
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:36.895868+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7281 sent 7280 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:06.088768+0000 osd.2 (osd.2) 7281 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6147> 2026-01-22T15:37:07.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7281) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:06.088768+0000 osd.2 (osd.2) 7281 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:37.896084+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7282 sent 7281 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:07.074227+0000 osd.2 (osd.2) 7282 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6136> 2026-01-22T15:37:08.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7282) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:07.074227+0000 osd.2 (osd.2) 7282 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
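
Every few seconds the MempoolThread re-carves the 2845415832-byte cache assignment into per-purpose shares, and the two rocksdb lines just before it reset the block-cache high-priority pool ratios (0.285714 is approximately 2/7, 0.0555556 approximately 1/18). Taking the *_alloc/*_used fields at face value (an assumption about their meaning), the split and its utilization work out as below; the used figures are tiny next to the allocations, which fits an OSD that is stalled rather than busy:

    cache_size = 2845415832
    alloc = {"kv": 1207959552, "kv_onode": 234881024,
             "meta": 1140850688, "data": 218103808}
    used  = {"kv": 2144, "kv_onode": 464,
             "meta": 2718644, "data": 13574144}

    for name, a in alloc.items():
        print(f"{name:8s} {a/2**20:7.1f} MiB ({a/cache_size:5.1%} of cache), "
              f"used {used[name]/a:8.4%}")
    print(f"allocations cover {sum(alloc.values())/cache_size:.1%} of cache_size")
    # kv       1152.0 MiB (42.5% of cache), used  0.0002%
    # ...
    # allocations cover 98.5% of cache_size
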
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:38.896289+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7283 sent 7282 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:08.026805+0000 osd.2 (osd.2) 7283 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6122> 2026-01-22T15:37:09.036+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7283) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:08.026805+0000 osd.2 (osd.2) 7283 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:39.896510+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7284 sent 7283 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:09.036883+0000 osd.2 (osd.2) 7284 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6111> 2026-01-22T15:37:10.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7284) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:09.036883+0000 osd.2 (osd.2) 7284 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
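
In every tune_memory line, heap = mapped + unmapped holds exactly (here 207552512 + 9101312 = 216653824), which matches these being allocator heap statistics: bytes mapped and in use, bytes freed but not yet returned to the OS, and the total heap; the target of 4294967296 is a 4 GiB osd_memory_target. With the process mapping only about 5% of the target, the tuner leaves the aggregate cache assignment unchanged (old mem == new mem). A quick reading of the numbers (the allocator-stats interpretation is my assumption):

    target, mapped, unmapped, heap = 4294967296, 207552512, 9101312, 216653824
    cache_old, cache_new = 2845415832, 2845415832

    assert mapped + unmapped == heap          # holds on every line in this log
    print(f"mapped is {mapped/target:.1%} of the {target/2**30:.0f} GiB target")
    print(f"unreleased heap: {unmapped/heap:.1%}")
    print("cache assignment steady:", cache_old == cache_new)
    # mapped is 4.8% of the 4 GiB target
    # unreleased heap: 4.2%
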
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:40.896708+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7285 sent 7284 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:10.067060+0000 osd.2 (osd.2) 7285 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6100> 2026-01-22T15:37:11.074+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7285) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:10.067060+0000 osd.2 (osd.2) 7285 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,0,1,4,11,34,40,68,30])
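
The 24-slot "op hist" vector looks like the op tracker's power-of-two age histogram, with ages in milliseconds and bucket i counting ops aged roughly [2^(i-1), 2^i) ms; treat the exact binning as an assumption. Read that way, the tail buckets say dozens of ops have been queued from minutes up to a couple of hours, which is exactly what the slow-request warnings report:

    # Assumed decoding of the pow2 age histogram from the heartbeat line above.
    hist = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,0,1,4,11,34,40,68,30]

    for i, n in enumerate(hist):
        if n:
            lo = 0 if i == 0 else 2 ** (i - 1)
            print(f"{n:3d} ops aged {lo/1000:8.1f}s .. {2**i/1000:8.1f}s")
    #   1 ops aged      8.2s ..     16.4s
    # ...
    #  30 ops aged   4194.3s ..   8388.6s
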
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:41.896981+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7286 sent 7285 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:11.074655+0000 osd.2 (osd.2) 7286 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6088> 2026-01-22T15:37:12.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7286) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:11.074655+0000 osd.2 (osd.2) 7286 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,1,1,4,11,34,40,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:42.897204+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7287 sent 7286 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:12.102585+0000 osd.2 (osd.2) 7287 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6076> 2026-01-22T15:37:13.062+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7287) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:12.102585+0000 osd.2 (osd.2) 7287 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:43.897383+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7288 sent 7287 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:13.063132+0000 osd.2 (osd.2) 7288 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6062> 2026-01-22T15:37:14.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7288) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:13.063132+0000 osd.2 (osd.2) 7288 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:44.897561+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7289 sent 7288 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:14.098895+0000 osd.2 (osd.2) 7289 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6051> 2026-01-22T15:37:15.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7289) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:14.098895+0000 osd.2 (osd.2) 7289 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:45.897738+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7290 sent 7289 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:15.148734+0000 osd.2 (osd.2) 7290 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6040> 2026-01-22T15:37:16.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7290) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:15.148734+0000 osd.2 (osd.2) 7290 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 9093120 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:46.897903+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7291 sent 7290 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:16.123004+0000 osd.2 (osd.2) 7291 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6029> 2026-01-22T15:37:17.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
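
Note that the oldest slow op never changes across this whole stretch: client.14140.0:10, i.e. the tenth request of client session 14140, an omap-get-vals read of rbd_mirror_snapshot_schedule:head in PG 2.12, submitted at map epoch 50 while the OSD is already at epoch 180. Everything else is queueing behind the same stall, and at 15:37:17 the backlog steps from 49 to 132 slow requests. Pulling the descriptor apart (regex written against the examples here, so its layout is an assumption):

    import re

    OSD_OP_RE = re.compile(
        r"osd_op\(client\.(?P<client>\d+)\.0:(?P<tid>\d+) "
        r"(?P<pgid>\S+) \S*:::(?P<obj>[^:]+):(?P<snap>\S+) "
        r"\[(?P<ops>[^\]]+)\].*? e(?P<epoch>\d+)\)"
    )

    s = ("osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head "
         "[omap-get-vals in=16b] snapc 0=[] "
         "ondisk+read+known_if_redirected+supports_pool_eio e50)")
    print(OSD_OP_RE.search(s).groupdict())
    # {'client': '14140', 'tid': '10', 'pgid': '2.12',
    #  'obj': 'rbd_mirror_snapshot_schedule', 'snap': 'head',
    #  'ops': 'omap-get-vals in=16b', 'epoch': '50'}
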
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7291) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:16.123004+0000 osd.2 (osd.2) 7291 : cluster [WRN] 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 9093120 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:47.898103+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7292 sent 7291 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:17.165582+0000 osd.2 (osd.2) 7292 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6018> 2026-01-22T15:37:18.174+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,1,1,3,12,34,40,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7292) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:17.165582+0000 osd.2 (osd.2) 7292 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:48.898359+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7293 sent 7292 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:18.175944+0000 osd.2 (osd.2) 7293 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,1,1,3,12,34,40,68,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -6002> 2026-01-22T15:37:19.125+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7293) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:18.175944+0000 osd.2 (osd.2) 7293 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:49.898565+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7294 sent 7293 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:19.126775+0000 osd.2 (osd.2) 7294 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5991> 2026-01-22T15:37:20.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7294) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:19.126775+0000 osd.2 (osd.2) 7294 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:50.898899+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7295 sent 7294 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:20.123789+0000 osd.2 (osd.2) 7295 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5980> 2026-01-22T15:37:21.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7295) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:20.123789+0000 osd.2 (osd.2) 7295 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,12,34,40,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:51.899124+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7296 sent 7295 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:21.116521+0000 osd.2 (osd.2) 7296 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5967> 2026-01-22T15:37:22.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,12,34,40,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7296) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:21.116521+0000 osd.2 (osd.2) 7296 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:52.899377+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7297 sent 7296 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:22.129926+0000 osd.2 (osd.2) 7297 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5955> 2026-01-22T15:37:23.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7297) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:22.129926+0000 osd.2 (osd.2) 7297 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:53.899568+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7298 sent 7297 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:23.160781+0000 osd.2 (osd.2) 7298 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5941> 2026-01-22T15:37:24.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7298) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:23.160781+0000 osd.2 (osd.2) 7298 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:54.899806+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7299 sent 7298 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:24.119884+0000 osd.2 (osd.2) 7299 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5930> 2026-01-22T15:37:25.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7299) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:24.119884+0000 osd.2 (osd.2) 7299 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:55.899986+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7300 sent 7299 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:25.100178+0000 osd.2 (osd.2) 7300 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5919> 2026-01-22T15:37:26.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735caef800 session 0x55735d038d20
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c5cd000
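
At 15:37:26 the OSD records ms_handle_reset (a messenger session dropped) and immediately answers a fresh auth challenge; on the very next health sample the slow-op count falls from 132 to 96, consistent with requests tied to the reset session being cleared or finally completing. On a live system (rather than this replayed buffer) the natural follow-up is to interrogate the daemon directly; a sketch assuming the ceph CLI can reach osd.2's admin socket on compute-2 (under cephadm, typically from inside a cephadm shell):

    import json
    import subprocess

    def osd_daemon(osd_id, *cmd):
        """Run an admin-socket command against a local OSD via the ceph CLI."""
        out = subprocess.check_output(["ceph", "daemon", f"osd.{osd_id}", *cmd])
        return json.loads(out)

    inflight = osd_daemon(2, "dump_ops_in_flight")      # ops currently stuck
    slow     = osd_daemon(2, "dump_historic_slow_ops")  # recent completed slow ops
    print(inflight["num_ops"], "ops in flight")
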
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:56.900223+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7301 sent 7300 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:26.068766+0000 osd.2 (osd.2) 7301 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5908> 2026-01-22T15:37:27.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,1,3,1,3,12,33,41,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:57.900416+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7302 sent 7301 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:27.042999+0000 osd.2 (osd.2) 7302 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5898> 2026-01-22T15:37:28.002+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7300) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:25.100178+0000 osd.2 (osd.2) 7300 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:58.900637+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7303 sent 7302 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:28.004776+0000 osd.2 (osd.2) 7303 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5884> 2026-01-22T15:37:29.043+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7301) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:26.068766+0000 osd.2 (osd.2) 7301 : cluster [WRN] 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7302) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:27.042999+0000 osd.2 (osd.2) 7302 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7303) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:28.004776+0000 osd.2 (osd.2) 7303 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,1,3,1,3,12,33,41,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:36:59.900904+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7304 sent 7303 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:29.045186+0000 osd.2 (osd.2) 7304 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5868> 2026-01-22T15:37:30.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7304) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:29.045186+0000 osd.2 (osd.2) 7304 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:00.901125+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7305 sent 7304 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:30.081769+0000 osd.2 (osd.2) 7305 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5857> 2026-01-22T15:37:31.069+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7305) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:30.081769+0000 osd.2 (osd.2) 7305 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:01.901357+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7306 sent 7305 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:31.071493+0000 osd.2 (osd.2) 7306 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5846> 2026-01-22T15:37:32.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7306) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:31.071493+0000 osd.2 (osd.2) 7306 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,3,1,3,12,33,41,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:02.901542+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7307 sent 7306 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:32.028403+0000 osd.2 (osd.2) 7307 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5834> 2026-01-22T15:37:33.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7307) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:32.028403+0000 osd.2 (osd.2) 7307 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:03.901708+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7308 sent 7307 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:33.027913+0000 osd.2 (osd.2) 7308 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5820> 2026-01-22T15:37:34.044+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7308) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:33.027913+0000 osd.2 (osd.2) 7308 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:04.901906+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7309 sent 7308 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:34.045987+0000 osd.2 (osd.2) 7309 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5809> 2026-01-22T15:37:35.011+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7309) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:34.045987+0000 osd.2 (osd.2) 7309 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,1,3,12,33,41,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:05.902089+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7310 sent 7309 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:35.013012+0000 osd.2 (osd.2) 7310 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5797> 2026-01-22T15:37:35.989+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7310) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:35.013012+0000 osd.2 (osd.2) 7310 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:06.902363+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7311 sent 7310 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:35.990944+0000 osd.2 (osd.2) 7311 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5786> 2026-01-22T15:37:36.960+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7311) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:35.990944+0000 osd.2 (osd.2) 7311 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:07.902563+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7312 sent 7311 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:36.962275+0000 osd.2 (osd.2) 7312 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5775> 2026-01-22T15:37:38.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7312) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:36.962275+0000 osd.2 (osd.2) 7312 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:08.902759+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7313 sent 7312 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:38.011177+0000 osd.2 (osd.2) 7313 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5761> 2026-01-22T15:37:38.994+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,1,3,12,33,41,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7313) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:38.011177+0000 osd.2 (osd.2) 7313 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:09.902902+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7314 sent 7313 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:38.995039+0000 osd.2 (osd.2) 7314 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5749> 2026-01-22T15:37:39.981+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7314) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:38.995039+0000 osd.2 (osd.2) 7314 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:10.903113+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7315 sent 7314 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:39.981736+0000 osd.2 (osd.2) 7315 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5738> 2026-01-22T15:37:40.978+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:11.903283+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7316 sent 7315 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:40.978759+0000 osd.2 (osd.2) 7316 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7315) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:39.981736+0000 osd.2 (osd.2) 7315 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5727> 2026-01-22T15:37:41.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:12.903488+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7317 sent 7316 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:41.992294+0000 osd.2 (osd.2) 7317 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5718> 2026-01-22T15:37:42.952+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7316) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:40.978759+0000 osd.2 (osd.2) 7316 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7317) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:41.992294+0000 osd.2 (osd.2) 7317 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:13.903673+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7318 sent 7317 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:42.952879+0000 osd.2 (osd.2) 7318 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5702> 2026-01-22T15:37:43.915+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,2,3,12,33,41,68,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:14.903979+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7319 sent 7318 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:43.915998+0000 osd.2 (osd.2) 7319 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5692> 2026-01-22T15:37:44.943+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7318) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:42.952879+0000 osd.2 (osd.2) 7318 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7319) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:43.915998+0000 osd.2 (osd.2) 7319 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5685> 2026-01-22T15:37:45.903+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:15.904172+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7320 sent 7319 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:44.944481+0000 osd.2 (osd.2) 7320 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7320) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:44.944481+0000 osd.2 (osd.2) 7320 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5674> 2026-01-22T15:37:46.885+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:16.904387+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7322 sent 7320 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:45.904460+0000 osd.2 (osd.2) 7321 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:46.886512+0000 osd.2 (osd.2) 7322 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7322) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:45.904460+0000 osd.2 (osd.2) 7321 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:46.886512+0000 osd.2 (osd.2) 7322 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:17.904600+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5658> 2026-01-22T15:37:47.935+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:18.904772+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7323 sent 7322 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:47.936021+0000 osd.2 (osd.2) 7323 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5646> 2026-01-22T15:37:48.944+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7323) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:47.936021+0000 osd.2 (osd.2) 7323 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:19.904973+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7324 sent 7323 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:48.944708+0000 osd.2 (osd.2) 7324 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5635> 2026-01-22T15:37:49.953+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7324) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:48.944708+0000 osd.2 (osd.2) 7324 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,2,3,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:20.905140+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7325 sent 7324 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:49.954261+0000 osd.2 (osd.2) 7325 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5623> 2026-01-22T15:37:50.996+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7325) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:49.954261+0000 osd.2 (osd.2) 7325 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:21.905409+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7326 sent 7325 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:50.997007+0000 osd.2 (osd.2) 7326 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5612> 2026-01-22T15:37:51.980+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7326) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:50.997007+0000 osd.2 (osd.2) 7326 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:22.905601+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7327 sent 7326 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:51.981782+0000 osd.2 (osd.2) 7327 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5601> 2026-01-22T15:37:53.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7327) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:51.981782+0000 osd.2 (osd.2) 7327 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:23.905848+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7328 sent 7327 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:53.017354+0000 osd.2 (osd.2) 7328 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5587> 2026-01-22T15:37:54.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7328) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:53.017354+0000 osd.2 (osd.2) 7328 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:24.906104+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7329 sent 7328 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:54.028657+0000 osd.2 (osd.2) 7329 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5576> 2026-01-22T15:37:54.998+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211550208 unmapped: 5103616 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:25.906350+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7330 sent 7329 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:54.999486+0000 osd.2 (osd.2) 7330 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5567> 2026-01-22T15:37:56.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7329) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:54.028657+0000 osd.2 (osd.2) 7329 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,4,3,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211550208 unmapped: 5103616 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:26.906532+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7331 sent 7330 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:56.006227+0000 osd.2 (osd.2) 7331 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5554> 2026-01-22T15:37:56.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7330) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:54.999486+0000 osd.2 (osd.2) 7330 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7331) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:56.006227+0000 osd.2 (osd.2) 7331 : cluster [WRN] 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 5095424 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:27.906698+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7332 sent 7331 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:57.001361+0000 osd.2 (osd.2) 7332 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5541> 2026-01-22T15:37:58.023+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c5f7800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 5095424 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
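The prioritycache and bluestore.MempoolThread lines above show the OSD memory autotuner at work: target 4294967296 is a 4 GiB osd_memory_target, and _resize_shards reports how the roughly 2.65 GiB cache budget (cache_size) is divided between RocksDB key/value, onode, metadata, and data shards, with actual usage far below the allocations on this mostly idle OSD. A small parsing sketch — the field names are taken verbatim from the line above; the MiB conversion is only for readability:

```python
import re

line = ("bluestore.MempoolThread(0x557358e83b60) _resize_shards "
        "cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 "
        "kv_onode_alloc: 234881024 kv_onode_used: 464 "
        "meta_alloc: 1140850688 meta_used: 2718644 "
        "data_alloc: 218103808 data_used: 13574144")

# Pull every "name: value" pair out of the line and print it in MiB.
fields = dict(re.findall(r"(\w+): (\d+)", line))
for name, value in fields.items():
    print(f"{name:>16}: {int(value) / 2**20:10.1f} MiB")
```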
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:28.906866+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7333 sent 7332 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:58.024613+0000 osd.2 (osd.2) 7333 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5528> 2026-01-22T15:37:59.014+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 60.944828033s of 62.401416779s, submitted: 329
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7332) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:57.001361+0000 osd.2 (osd.2) 7332 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 21782528 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:29.907053+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7334 sent 7333 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:37:59.015232+0000 osd.2 (osd.2) 7334 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5516> 2026-01-22T15:38:00.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 21782528 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:30.907470+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7335 sent 7334 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:00.017885+0000 osd.2 (osd.2) 7335 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7333) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:58.024613+0000 osd.2 (osd.2) 7333 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7334) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:37:59.015232+0000 osd.2 (osd.2) 7334 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5503> 2026-01-22T15:38:01.024+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _renew_subs
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
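The two handle_osd_map lines above show incremental osdmap catch-up: the monitor offers epochs [180,181], the OSD (which has 180) applies up to 181, and the follow-up message confirms it is current. An illustrative one-liner for working out which epochs a message like this would make the OSD apply:

```python
import re

line = "osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]"
m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+)", line)
first, last, have = map(int, m.groups())
missing = range(max(first, have + 1), last + 1)
print("epochs to apply:", list(missing))   # -> [181]
```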
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 181 ms_handle_reset con 0x55735c5f7800 session 0x55735d3912c0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211664896 unmapped: 21774336 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7335) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:00.017885+0000 osd.2 (osd.2) 7335 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:31.907671+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7336 sent 7335 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:01.026298+0000 osd.2 (osd.2) 7336 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5487> 2026-01-22T15:38:01.992+0000 7f47f8ed4640 -1 osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 181 heartbeat osd_stat(store_statfs(0x1b0768000/0x0/0x1bfc00000, data 0xc414729/0xb2f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,3,12,33,40,69,30])
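The heartbeat line above packs the OSD's space accounting into store_statfs(...). Going by BlueStore's store_statfs_t output, the first hex triple is conventionally available / internally-reserved / total bytes — treat that field order as an assumption and verify it against your Ceph version; the hex values themselves are what this sketch extracts (0x1bfc00000 is about 7 GiB, consistent with a small test OSD):

```python
import re

line = ("osd.2 181 heartbeat osd_stat(store_statfs("
        "0x1b0768000/0x0/0x1bfc00000, data 0xc414729/0xb2f5000, "
        "compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), ...)")

m = re.search(r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+)", line)
# Assumed field order: available / internally reserved / total.
available, reserved, total = (int(v, 16) for v in m.groups())
print(f"total  {total / 2**30:.2f} GiB")
print(f"avail  {available / 2**30:.2f} GiB")
print(f"used   {(total - available) / 2**30:.2f} GiB")
```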
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c33dc00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211681280 unmapped: 21757952 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:32.907889+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7337 sent 7336 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:01.993874+0000 osd.2 (osd.2) 7337 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7336) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:01.026298+0000 osd.2 (osd.2) 7336 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: get_auth_request con 0x55735a6e4800 auth_method 0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5472> 2026-01-22T15:38:02.995+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211705856 unmapped: 21733376 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2813337 data_alloc: 218103808 data_used: 13582336
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:33.908094+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7338 sent 7337 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:02.996645+0000 osd.2 (osd.2) 7338 : cluster [WRN] 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5460> 2026-01-22T15:38:03.988+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7337) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:01.993874+0000 osd.2 (osd.2) 7337 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7338) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:02.996645+0000 osd.2 (osd.2) 7338 : cluster [WRN] 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 ms_handle_reset con 0x55735c33dc00 session 0x55735ad96960
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:34.908294+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7339 sent 7338 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:03.989805+0000 osd.2 (osd.2) 7339 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5446> 2026-01-22T15:38:04.958+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7339) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:03.989805+0000 osd.2 (osd.2) 7339 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:35.908587+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7340 sent 7339 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:04.960014+0000 osd.2 (osd.2) 7340 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5435> 2026-01-22T15:38:06.002+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7340) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:04.960014+0000 osd.2 (osd.2) 7340 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:36.908760+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7341 sent 7340 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:06.004094+0000 osd.2 (osd.2) 7341 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5424> 2026-01-22T15:38:06.963+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7341) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:06.004094+0000 osd.2 (osd.2) 7341 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:37.909005+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7342 sent 7341 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:06.965337+0000 osd.2 (osd.2) 7342 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5413> 2026-01-22T15:38:07.955+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 heartbeat osd_stat(store_statfs(0x1b13d5000/0x0/0x1bfc00000, data 0xb7a63f3/0xa688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,3,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 heartbeat osd_stat(store_statfs(0x1b13d5000/0x0/0x1bfc00000, data 0xb7a63f3/0xa688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211812352 unmapped: 21626880 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2729241 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:38.909341+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7343 sent 7342 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:07.956566+0000 osd.2 (osd.2) 7343 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5399> 2026-01-22T15:38:08.986+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7342) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:06.965337+0000 osd.2 (osd.2) 7342 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.054518700s of 10.458124161s, submitted: 50
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: get_auth_request con 0x55735b5cb400 auth_method 0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211820544 unmapped: 21618688 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:39.909531+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7344 sent 7343 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:08.988095+0000 osd.2 (osd.2) 7344 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5385> 2026-01-22T15:38:09.988+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7343) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:07.956566+0000 osd.2 (osd.2) 7343 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7344) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:08.988095+0000 osd.2 (osd.2) 7344 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211820544 unmapped: 21618688 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:40.909705+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7345 sent 7344 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:09.989646+0000 osd.2 (osd.2) 7345 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5371> 2026-01-22T15:38:11.019+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7345) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:09.989646+0000 osd.2 (osd.2) 7345 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211820544 unmapped: 21618688 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:41.909865+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7346 sent 7345 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:11.020610+0000 osd.2 (osd.2) 7346 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5360> 2026-01-22T15:38:12.006+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211836928 unmapped: 21602304 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:42.910095+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7347 sent 7346 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:12.008298+0000 osd.2 (osd.2) 7347 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7346) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:11.020610+0000 osd.2 (osd.2) 7346 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5349> 2026-01-22T15:38:12.989+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211836928 unmapped: 21602304 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:43.910344+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7348 sent 7347 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:12.990899+0000 osd.2 (osd.2) 7348 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7347) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:12.008298+0000 osd.2 (osd.2) 7347 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5335> 2026-01-22T15:38:13.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:44.910555+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7349 sent 7348 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:13.959406+0000 osd.2 (osd.2) 7349 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5325> 2026-01-22T15:38:14.992+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7348) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:12.990899+0000 osd.2 (osd.2) 7348 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7349) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:13.959406+0000 osd.2 (osd.2) 7349 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:45.910797+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7350 sent 7349 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:14.993491+0000 osd.2 (osd.2) 7350 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5312> 2026-01-22T15:38:16.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7350) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:14.993491+0000 osd.2 (osd.2) 7350 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:46.911028+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7351 sent 7350 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:16.014522+0000 osd.2 (osd.2) 7351 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5301> 2026-01-22T15:38:17.020+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7351) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:16.014522+0000 osd.2 (osd.2) 7351 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:47.911213+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7352 sent 7351 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:17.021128+0000 osd.2 (osd.2) 7352 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5290> 2026-01-22T15:38:17.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7352) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:17.021128+0000 osd.2 (osd.2) 7352 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:48.911428+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7353 sent 7352 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:17.981619+0000 osd.2 (osd.2) 7353 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5275> 2026-01-22T15:38:19.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7353) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:17.981619+0000 osd.2 (osd.2) 7353 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:49.911608+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7354 sent 7353 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:19.012104+0000 osd.2 (osd.2) 7354 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5264> 2026-01-22T15:38:19.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7354) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:19.012104+0000 osd.2 (osd.2) 7354 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:50.911812+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7355 sent 7354 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:19.982036+0000 osd.2 (osd.2) 7355 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5253> 2026-01-22T15:38:20.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:51.912003+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7356 sent 7355 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:20.938107+0000 osd.2 (osd.2) 7356 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7355) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:19.982036+0000 osd.2 (osd.2) 7355 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5241> 2026-01-22T15:38:21.968+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:52.912184+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7357 sent 7356 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:21.968769+0000 osd.2 (osd.2) 7357 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7356) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:20.938107+0000 osd.2 (osd.2) 7356 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5230> 2026-01-22T15:38:23.012+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:53.912406+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7358 sent 7357 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:23.013290+0000 osd.2 (osd.2) 7358 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7357) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:21.968769+0000 osd.2 (osd.2) 7357 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5216> 2026-01-22T15:38:23.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:54.912616+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7359 sent 7358 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:23.983663+0000 osd.2 (osd.2) 7359 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7358) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:23.013290+0000 osd.2 (osd.2) 7358 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7359) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:23.983663+0000 osd.2 (osd.2) 7359 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5202> 2026-01-22T15:38:24.938+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:55.912828+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7360 sent 7359 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:24.938772+0000 osd.2 (osd.2) 7360 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7360) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:24.938772+0000 osd.2 (osd.2) 7360 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5191> 2026-01-22T15:38:25.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:56.913033+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7361 sent 7360 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:25.981617+0000 osd.2 (osd.2) 7361 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7361) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:25.981617+0000 osd.2 (osd.2) 7361 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5180> 2026-01-22T15:38:27.005+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:57.913247+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7362 sent 7361 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:27.006059+0000 osd.2 (osd.2) 7362 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7362) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:27.006059+0000 osd.2 (osd.2) 7362 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5169> 2026-01-22T15:38:28.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:58.913463+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7363 sent 7362 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:28.037499+0000 osd.2 (osd.2) 7363 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5156> 2026-01-22T15:38:29.022+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7363) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:28.037499+0000 osd.2 (osd.2) 7363 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:37:59.913630+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7364 sent 7363 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:29.023504+0000 osd.2 (osd.2) 7364 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5144> 2026-01-22T15:38:30.051+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7364) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:29.023504+0000 osd.2 (osd.2) 7364 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:00.913804+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7365 sent 7364 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:30.052938+0000 osd.2 (osd.2) 7365 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5133> 2026-01-22T15:38:31.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7365) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:30.052938+0000 osd.2 (osd.2) 7365 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:01.914053+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7366 sent 7365 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:31.049244+0000 osd.2 (osd.2) 7366 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5122> 2026-01-22T15:38:32.028+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7366) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:31.049244+0000 osd.2 (osd.2) 7366 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:02.914256+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7367 sent 7366 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:32.030044+0000 osd.2 (osd.2) 7367 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5111> 2026-01-22T15:38:33.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7367) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:32.030044+0000 osd.2 (osd.2) 7367 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:03.914451+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7368 sent 7367 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:33.012659+0000 osd.2 (osd.2) 7368 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5097> 2026-01-22T15:38:34.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7368) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:33.012659+0000 osd.2 (osd.2) 7368 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:04.914640+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7369 sent 7368 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:34.060739+0000 osd.2 (osd.2) 7369 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5086> 2026-01-22T15:38:35.053+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7369) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:34.060739+0000 osd.2 (osd.2) 7369 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,4,12,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:05.914913+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7370 sent 7369 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:35.055035+0000 osd.2 (osd.2) 7370 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5074> 2026-01-22T15:38:36.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:06.915178+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7371 sent 7370 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:36.023035+0000 osd.2 (osd.2) 7371 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5065> 2026-01-22T15:38:37.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7370) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:35.055035+0000 osd.2 (osd.2) 7370 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:07.915406+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7372 sent 7371 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:37.049016+0000 osd.2 (osd.2) 7372 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5054> 2026-01-22T15:38:38.009+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7371) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:36.023035+0000 osd.2 (osd.2) 7371 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7372) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:37.049016+0000 osd.2 (osd.2) 7372 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:08.915615+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7373 sent 7372 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:38.010803+0000 osd.2 (osd.2) 7373 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5038> 2026-01-22T15:38:39.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7373) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:38.010803+0000 osd.2 (osd.2) 7373 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:09.915788+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7374 sent 7373 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:39.032923+0000 osd.2 (osd.2) 7374 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5027> 2026-01-22T15:38:40.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7374) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:39.032923+0000 osd.2 (osd.2) 7374 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:10.916021+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7375 sent 7374 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:40.071792+0000 osd.2 (osd.2) 7375 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5016> 2026-01-22T15:38:41.114+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7375) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:40.071792+0000 osd.2 (osd.2) 7375 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,4,12,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:11.916235+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7376 sent 7375 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:41.115517+0000 osd.2 (osd.2) 7376 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -5004> 2026-01-22T15:38:42.158+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7376) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:41.115517+0000 osd.2 (osd.2) 7376 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:12.916651+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7377 sent 7376 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:42.159431+0000 osd.2 (osd.2) 7377 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4993> 2026-01-22T15:38:43.159+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7377) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:42.159431+0000 osd.2 (osd.2) 7377 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:13.916893+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7378 sent 7377 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:43.161201+0000 osd.2 (osd.2) 7378 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4979> 2026-01-22T15:38:44.124+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7378) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:43.161201+0000 osd.2 (osd.2) 7378 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:14.917161+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7379 sent 7378 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:44.125734+0000 osd.2 (osd.2) 7379 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4968> 2026-01-22T15:38:45.092+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7379) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:44.125734+0000 osd.2 (osd.2) 7379 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:15.917418+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7380 sent 7379 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:45.093655+0000 osd.2 (osd.2) 7380 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4957> 2026-01-22T15:38:46.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7380) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:45.093655+0000 osd.2 (osd.2) 7380 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:16.917692+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7381 sent 7380 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:46.132072+0000 osd.2 (osd.2) 7381 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4946> 2026-01-22T15:38:47.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,3,13,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7381) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:46.132072+0000 osd.2 (osd.2) 7381 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:17.917914+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7382 sent 7381 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:47.091595+0000 osd.2 (osd.2) 7382 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4934> 2026-01-22T15:38:48.058+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:18.918125+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7383 sent 7382 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:48.060253+0000 osd.2 (osd.2) 7383 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4925> 2026-01-22T15:38:49.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7382) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:47.091595+0000 osd.2 (osd.2) 7382 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:19.918346+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7384 sent 7383 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:49.038470+0000 osd.2 (osd.2) 7384 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4911> 2026-01-22T15:38:50.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7383) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:48.060253+0000 osd.2 (osd.2) 7383 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7384) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:49.038470+0000 osd.2 (osd.2) 7384 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:20.918543+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7385 sent 7384 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:50.031818+0000 osd.2 (osd.2) 7385 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4898> 2026-01-22T15:38:51.034+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7385) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:50.031818+0000 osd.2 (osd.2) 7385 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:21.918743+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7386 sent 7385 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:51.035877+0000 osd.2 (osd.2) 7386 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4887> 2026-01-22T15:38:52.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7386) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:51.035877+0000 osd.2 (osd.2) 7386 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:22.918915+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7387 sent 7386 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:52.032035+0000 osd.2 (osd.2) 7387 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4876> 2026-01-22T15:38:53.024+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,13,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:23.919048+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7388 sent 7387 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:53.024401+0000 osd.2 (osd.2) 7388 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7387) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:52.032035+0000 osd.2 (osd.2) 7387 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4864> 2026-01-22T15:38:54.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,13,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:24.919250+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7389 sent 7388 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:54.014790+0000 osd.2 (osd.2) 7389 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4851> 2026-01-22T15:38:55.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7388) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:53.024401+0000 osd.2 (osd.2) 7388 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7389) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:54.014790+0000 osd.2 (osd.2) 7389 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,3,14,32,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:25.919497+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7390 sent 7389 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:55.021662+0000 osd.2 (osd.2) 7390 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4837> 2026-01-22T15:38:55.984+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7390) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:55.021662+0000 osd.2 (osd.2) 7390 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:26.919703+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7391 sent 7390 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:55.984443+0000 osd.2 (osd.2) 7391 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4827> 2026-01-22T15:38:56.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,3,13,33,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7391) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:55.984443+0000 osd.2 (osd.2) 7391 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4821> 2026-01-22T15:38:57.890+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:27.919921+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7393 sent 7391 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:56.936947+0000 osd.2 (osd.2) 7392 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:57.890696+0000 osd.2 (osd.2) 7393 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4811> 2026-01-22T15:38:58.844+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:28.920229+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7394 sent 7393 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:58.844828+0000 osd.2 (osd.2) 7394 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7393) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:56.936947+0000 osd.2 (osd.2) 7392 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:57.890696+0000 osd.2 (osd.2) 7393 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4796> 2026-01-22T15:38:59.887+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:29.920496+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7395 sent 7394 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:38:59.887830+0000 osd.2 (osd.2) 7395 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7394) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:58.844828+0000 osd.2 (osd.2) 7394 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7395) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:38:59.887830+0000 osd.2 (osd.2) 7395 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:30.920731+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4780> 2026-01-22T15:39:00.923+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4777> 2026-01-22T15:39:01.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:31.920926+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7397 sent 7395 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:00.924116+0000 osd.2 (osd.2) 7396 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:01.895654+0000 osd.2 (osd.2) 7397 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7397) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:00.924116+0000 osd.2 (osd.2) 7396 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:01.895654+0000 osd.2 (osd.2) 7397 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4764> 2026-01-22T15:39:02.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:32.921120+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7398 sent 7397 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:02.893551+0000 osd.2 (osd.2) 7398 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,33,41,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7398) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:02.893551+0000 osd.2 (osd.2) 7398 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4752> 2026-01-22T15:39:03.847+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:33.921394+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7399 sent 7398 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:03.848289+0000 osd.2 (osd.2) 7399 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7399) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:03.848289+0000 osd.2 (osd.2) 7399 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4738> 2026-01-22T15:39:04.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:34.921674+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7400 sent 7399 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:04.855120+0000 osd.2 (osd.2) 7400 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7400) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:04.855120+0000 osd.2 (osd.2) 7400 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4727> 2026-01-22T15:39:05.831+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:35.921931+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7401 sent 7400 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:05.832700+0000 osd.2 (osd.2) 7401 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7401) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:05.832700+0000 osd.2 (osd.2) 7401 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4716> 2026-01-22T15:39:06.803+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:36.922190+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7402 sent 7401 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:06.805080+0000 osd.2 (osd.2) 7402 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4707> 2026-01-22T15:39:07.765+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7402) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:06.805080+0000 osd.2 (osd.2) 7402 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:37.922377+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7403 sent 7402 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:07.767060+0000 osd.2 (osd.2) 7403 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,32,42,69,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4695> 2026-01-22T15:39:08.724+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:38.922548+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7404 sent 7403 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:08.725389+0000 osd.2 (osd.2) 7404 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7403) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:07.767060+0000 osd.2 (osd.2) 7403 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4681> 2026-01-22T15:39:09.742+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,32,42,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:39.923098+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7405 sent 7404 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:09.744024+0000 osd.2 (osd.2) 7405 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7404) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:08.725389+0000 osd.2 (osd.2) 7404 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7405) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:09.744024+0000 osd.2 (osd.2) 7405 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4667> 2026-01-22T15:39:10.762+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:40.923397+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7406 sent 7405 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:10.763708+0000 osd.2 (osd.2) 7406 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7406) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:10.763708+0000 osd.2 (osd.2) 7406 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,32,42,69,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4655> 2026-01-22T15:39:11.801+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:41.923611+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7407 sent 7406 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:11.802428+0000 osd.2 (osd.2) 7407 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7407) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:11.802428+0000 osd.2 (osd.2) 7407 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4644> 2026-01-22T15:39:12.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:42.923835+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7408 sent 7407 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:12.794134+0000 osd.2 (osd.2) 7408 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7408) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:12.794134+0000 osd.2 (osd.2) 7408 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4633> 2026-01-22T15:39:13.761+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:43.924051+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7409 sent 7408 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:13.763161+0000 osd.2 (osd.2) 7409 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7409) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:13.763161+0000 osd.2 (osd.2) 7409 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4619> 2026-01-22T15:39:14.752+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:44.924248+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7410 sent 7409 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:14.754218+0000 osd.2 (osd.2) 7410 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7410) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:14.754218+0000 osd.2 (osd.2) 7410 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4608> 2026-01-22T15:39:15.704+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:45.924428+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7411 sent 7410 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:15.705863+0000 osd.2 (osd.2) 7411 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4599> 2026-01-22T15:39:16.746+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,6,13,30,44,69,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:46.924637+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7412 sent 7411 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:16.748285+0000 osd.2 (osd.2) 7412 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7411) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:15.705863+0000 osd.2 (osd.2) 7411 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4587> 2026-01-22T15:39:17.779+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:47.924881+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7413 sent 7412 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:17.781359+0000 osd.2 (osd.2) 7413 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7412) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:16.748285+0000 osd.2 (osd.2) 7412 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7413) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:17.781359+0000 osd.2 (osd.2) 7413 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4574> 2026-01-22T15:39:18.805+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:48.925072+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7414 sent 7413 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:18.807213+0000 osd.2 (osd.2) 7414 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7414) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:18.807213+0000 osd.2 (osd.2) 7414 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4560> 2026-01-22T15:39:19.840+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:49.925345+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7415 sent 7414 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:19.842192+0000 osd.2 (osd.2) 7415 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4551> 2026-01-22T15:39:20.848+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:50.925565+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7416 sent 7415 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:20.850060+0000 osd.2 (osd.2) 7416 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7415) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:19.842192+0000 osd.2 (osd.2) 7415 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4540> 2026-01-22T15:39:21.812+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:51.925807+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7417 sent 7416 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:21.813738+0000 osd.2 (osd.2) 7417 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7416) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:20.850060+0000 osd.2 (osd.2) 7416 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7417) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:21.813738+0000 osd.2 (osd.2) 7417 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,14,30,43,70,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4526> 2026-01-22T15:39:22.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:52.925984+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7418 sent 7417 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:22.834421+0000 osd.2 (osd.2) 7418 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7418) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:22.834421+0000 osd.2 (osd.2) 7418 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4515> 2026-01-22T15:39:23.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:53.926160+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7419 sent 7418 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:23.835208+0000 osd.2 (osd.2) 7419 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4503> 2026-01-22T15:39:24.826+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:54.926355+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7420 sent 7419 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:24.828103+0000 osd.2 (osd.2) 7420 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735caf0800 session 0x55735afa90e0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735bf08800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,3,5,14,30,43,70,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7419) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:23.835208+0000 osd.2 (osd.2) 7419 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4489> 2026-01-22T15:39:25.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:55.926587+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7421 sent 7420 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:25.872661+0000 osd.2 (osd.2) 7421 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7420) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:24.828103+0000 osd.2 (osd.2) 7420 : cluster [WRN] 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7421) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:25.872661+0000 osd.2 (osd.2) 7421 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4476> 2026-01-22T15:39:26.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:56.926790+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7422 sent 7421 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:26.922712+0000 osd.2 (osd.2) 7422 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4467> 2026-01-22T15:39:27.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:57.927070+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7423 sent 7422 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:27.909863+0000 osd.2 (osd.2) 7423 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7422) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:26.922712+0000 osd.2 (osd.2) 7422 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:58.927330+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4453> 2026-01-22T15:39:28.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7423) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:27.909863+0000 osd.2 (osd.2) 7423 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:38:59.927465+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7424 sent 7423 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:28.940764+0000 osd.2 (osd.2) 7424 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4439> 2026-01-22T15:39:29.934+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7424) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:28.940764+0000 osd.2 (osd.2) 7424 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,3,5,14,30,43,70,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:00.927681+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7425 sent 7424 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:29.935230+0000 osd.2 (osd.2) 7425 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4427> 2026-01-22T15:39:30.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7425) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:29.935230+0000 osd.2 (osd.2) 7425 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:01.927889+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7426 sent 7425 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:30.972789+0000 osd.2 (osd.2) 7426 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4416> 2026-01-22T15:39:31.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,3,5,14,29,43,71,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7426) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:30.972789+0000 osd.2 (osd.2) 7426 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:02.928067+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7427 sent 7426 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:31.950211+0000 osd.2 (osd.2) 7427 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4404> 2026-01-22T15:39:32.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:03.928247+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7428 sent 7427 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:32.976276+0000 osd.2 (osd.2) 7428 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4395> 2026-01-22T15:39:33.986+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7427) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:31.950211+0000 osd.2 (osd.2) 7427 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7428) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:32.976276+0000 osd.2 (osd.2) 7428 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,5,14,29,43,71,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:04.928477+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7429 sent 7428 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:33.987112+0000 osd.2 (osd.2) 7429 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4378> 2026-01-22T15:39:34.960+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7429) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:33.987112+0000 osd.2 (osd.2) 7429 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4373> 2026-01-22T15:39:35.924+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:05.928653+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7431 sent 7429 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:34.960717+0000 osd.2 (osd.2) 7430 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:35.924496+0000 osd.2 (osd.2) 7431 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:06.928803+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4360> 2026-01-22T15:39:36.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7431) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:34.960717+0000 osd.2 (osd.2) 7430 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:35.924496+0000 osd.2 (osd.2) 7431 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:07.928981+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7432 sent 7431 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:36.937674+0000 osd.2 (osd.2) 7432 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4348> 2026-01-22T15:39:37.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:08.929184+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7433 sent 7432 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:37.972534+0000 osd.2 (osd.2) 7433 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4339> 2026-01-22T15:39:39.002+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7432) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:36.937674+0000 osd.2 (osd.2) 7432 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7433) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:37.972534+0000 osd.2 (osd.2) 7433 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:09.929383+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7434 sent 7433 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:39.002519+0000 osd.2 (osd.2) 7434 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4323> 2026-01-22T15:39:39.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7434) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:39.002519+0000 osd.2 (osd.2) 7434 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c33e800 session 0x55735cea8960
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735bf09000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3,0,0,0,0,1,0,0,0,8,14,29,43,71,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:10.929582+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7435 sent 7434 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:39.976095+0000 osd.2 (osd.2) 7435 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4309> 2026-01-22T15:39:40.997+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7435) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:39.976095+0000 osd.2 (osd.2) 7435 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:11.929762+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7436 sent 7435 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:40.997702+0000 osd.2 (osd.2) 7436 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4298> 2026-01-22T15:39:41.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7436) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:40.997702+0000 osd.2 (osd.2) 7436 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:12.929949+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7437 sent 7436 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:41.965960+0000 osd.2 (osd.2) 7437 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4287> 2026-01-22T15:39:42.932+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7437) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:41.965960+0000 osd.2 (osd.2) 7437 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:13.930140+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7438 sent 7437 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:42.933928+0000 osd.2 (osd.2) 7438 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4276> 2026-01-22T15:39:43.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7438) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:42.933928+0000 osd.2 (osd.2) 7438 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:14.930334+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7439 sent 7438 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:43.940906+0000 osd.2 (osd.2) 7439 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4262> 2026-01-22T15:39:44.944+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,8,13,30,43,71,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7439) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:43.940906+0000 osd.2 (osd.2) 7439 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:15.930572+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7440 sent 7439 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:44.946383+0000 osd.2 (osd.2) 7440 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4250> 2026-01-22T15:39:45.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7440) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:44.946383+0000 osd.2 (osd.2) 7440 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:16.930826+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7441 sent 7440 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:45.944628+0000 osd.2 (osd.2) 7441 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4239> 2026-01-22T15:39:46.966+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7441) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:45.944628+0000 osd.2 (osd.2) 7441 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,8,10,33,43,71,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:17.931122+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7442 sent 7441 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:46.968066+0000 osd.2 (osd.2) 7442 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4227> 2026-01-22T15:39:47.948+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,8,10,33,43,71,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7442) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:46.968066+0000 osd.2 (osd.2) 7442 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:18.931531+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7443 sent 7442 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:47.950061+0000 osd.2 (osd.2) 7443 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4215> 2026-01-22T15:39:48.933+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7443) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:47.950061+0000 osd.2 (osd.2) 7443 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4207> 2026-01-22T15:39:49.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:19.931850+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7445 sent 7443 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:48.934995+0000 osd.2 (osd.2) 7444 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:49.896663+0000 osd.2 (osd.2) 7445 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7445) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:48.934995+0000 osd.2 (osd.2) 7444 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:49.896663+0000 osd.2 (osd.2) 7445 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4194> 2026-01-22T15:39:50.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:20.932021+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7446 sent 7445 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:50.873277+0000 osd.2 (osd.2) 7446 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7446) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:50.873277+0000 osd.2 (osd.2) 7446 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4183> 2026-01-22T15:39:51.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:21.932178+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7447 sent 7446 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:51.905816+0000 osd.2 (osd.2) 7447 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,8,10,33,43,71,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7447) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:51.905816+0000 osd.2 (osd.2) 7447 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4171> 2026-01-22T15:39:52.903+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:22.932390+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7448 sent 7447 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:52.904954+0000 osd.2 (osd.2) 7448 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7448) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:52.904954+0000 osd.2 (osd.2) 7448 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4160> 2026-01-22T15:39:53.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:23.932615+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7449 sent 7448 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:53.895153+0000 osd.2 (osd.2) 7449 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7449) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:53.895153+0000 osd.2 (osd.2) 7449 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4146> 2026-01-22T15:39:54.884+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:24.932804+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7450 sent 7449 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:54.885603+0000 osd.2 (osd.2) 7450 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,8,10,30,45,72,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7450) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:54.885603+0000 osd.2 (osd.2) 7450 : cluster [WRN] 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4134> 2026-01-22T15:39:55.880+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:25.932982+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7451 sent 7450 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:55.882025+0000 osd.2 (osd.2) 7451 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4125> 2026-01-22T15:39:56.860+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:26.933157+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7452 sent 7451 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:56.861955+0000 osd.2 (osd.2) 7452 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7451) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:55.882025+0000 osd.2 (osd.2) 7451 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,8,10,22,53,72,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4113> 2026-01-22T15:39:57.857+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:27.933369+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7453 sent 7452 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:57.858987+0000 osd.2 (osd.2) 7453 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7452) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:56.861955+0000 osd.2 (osd.2) 7452 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7453) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:57.858987+0000 osd.2 (osd.2) 7453 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4100> 2026-01-22T15:39:58.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 15:45:44 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3267566899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:28.933547+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7454 sent 7453 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:58.858065+0000 osd.2 (osd.2) 7454 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7454) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:58.858065+0000 osd.2 (osd.2) 7454 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4086> 2026-01-22T15:39:59.814+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:29.933748+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7455 sent 7454 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:39:59.815549+0000 osd.2 (osd.2) 7455 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7455) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:39:59.815549+0000 osd.2 (osd.2) 7455 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4075> 2026-01-22T15:40:00.823+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:30.933924+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7456 sent 7455 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:00.824648+0000 osd.2 (osd.2) 7456 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4065> 2026-01-22T15:40:01.867+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:31.934106+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7457 sent 7456 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:01.868980+0000 osd.2 (osd.2) 7457 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7456) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:00.824648+0000 osd.2 (osd.2) 7456 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4054> 2026-01-22T15:40:02.897+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:32.934301+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7458 sent 7457 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:02.898613+0000 osd.2 (osd.2) 7458 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7457) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:01.868980+0000 osd.2 (osd.2) 7457 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7458) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:02.898613+0000 osd.2 (osd.2) 7458 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4040> 2026-01-22T15:40:03.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:33.934512+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7459 sent 7458 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:03.854845+0000 osd.2 (osd.2) 7459 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7459) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:03.854845+0000 osd.2 (osd.2) 7459 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4026> 2026-01-22T15:40:04.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:34.934673+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7460 sent 7459 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:04.861020+0000 osd.2 (osd.2) 7460 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7460) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:04.861020+0000 osd.2 (osd.2) 7460 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4013> 2026-01-22T15:40:05.877+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:35.934840+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7461 sent 7460 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:05.878056+0000 osd.2 (osd.2) 7461 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7461) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:05.878056+0000 osd.2 (osd.2) 7461 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -4002> 2026-01-22T15:40:06.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:36.934984+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7462 sent 7461 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:06.928301+0000 osd.2 (osd.2) 7462 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7462) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:06.928301+0000 osd.2 (osd.2) 7462 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:37.935124+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3987> 2026-01-22T15:40:07.959+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:38.935253+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7463 sent 7462 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:07.960102+0000 osd.2 (osd.2) 7463 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3978> 2026-01-22T15:40:08.963+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7463) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:07.960102+0000 osd.2 (osd.2) 7463 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:39.935469+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7464 sent 7463 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:08.964140+0000 osd.2 (osd.2) 7464 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3964> 2026-01-22T15:40:09.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7464) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:08.964140+0000 osd.2 (osd.2) 7464 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:40.935886+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7465 sent 7464 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:09.974872+0000 osd.2 (osd.2) 7465 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3953> 2026-01-22T15:40:10.994+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,9,22,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7465) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:09.974872+0000 osd.2 (osd.2) 7465 : cluster [WRN] 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:41.936083+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7466 sent 7465 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:10.994464+0000 osd.2 (osd.2) 7466 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3941> 2026-01-22T15:40:11.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7466) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:10.994464+0000 osd.2 (osd.2) 7466 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3936> 2026-01-22T15:40:12.912+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:42.936302+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7468 sent 7466 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:11.958628+0000 osd.2 (osd.2) 7467 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:12.913153+0000 osd.2 (osd.2) 7468 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3926> 2026-01-22T15:40:13.935+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:43.936574+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7469 sent 7468 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:13.935704+0000 osd.2 (osd.2) 7469 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7468) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:11.958628+0000 osd.2 (osd.2) 7467 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:12.913153+0000 osd.2 (osd.2) 7468 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7469) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:13.935704+0000 osd.2 (osd.2) 7469 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:44.936765+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3906> 2026-01-22T15:40:14.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:45.936921+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7470 sent 7469 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:14.975876+0000 osd.2 (osd.2) 7470 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3897> 2026-01-22T15:40:15.957+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7470) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:14.975876+0000 osd.2 (osd.2) 7470 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,5,26,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:46.937160+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7471 sent 7470 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:15.957429+0000 osd.2 (osd.2) 7471 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3885> 2026-01-22T15:40:16.998+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7471) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:15.957429+0000 osd.2 (osd.2) 7471 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:47.937358+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7472 sent 7471 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:16.999144+0000 osd.2 (osd.2) 7472 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3874> 2026-01-22T15:40:18.001+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7472) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:16.999144+0000 osd.2 (osd.2) 7472 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:48.937562+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7473 sent 7472 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:18.001912+0000 osd.2 (osd.2) 7473 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3863> 2026-01-22T15:40:19.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:49.937812+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7474 sent 7473 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:19.032798+0000 osd.2 (osd.2) 7474 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7473) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:18.001912+0000 osd.2 (osd.2) 7473 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7474) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:19.032798+0000 osd.2 (osd.2) 7474 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3847> 2026-01-22T15:40:19.996+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,7,6,26,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:50.938050+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7475 sent 7474 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:19.998539+0000 osd.2 (osd.2) 7475 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3837> 2026-01-22T15:40:20.987+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7475) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:19.998539+0000 osd.2 (osd.2) 7475 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:51.938240+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7476 sent 7475 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:20.989010+0000 osd.2 (osd.2) 7476 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3826> 2026-01-22T15:40:21.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7476) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:20.989010+0000 osd.2 (osd.2) 7476 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:52.938465+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7477 sent 7476 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:21.985481+0000 osd.2 (osd.2) 7477 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3815> 2026-01-22T15:40:22.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7477) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:21.985481+0000 osd.2 (osd.2) 7477 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3810> 2026-01-22T15:40:23.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:53.938732+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7479 sent 7477 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:22.942252+0000 osd.2 (osd.2) 7478 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:23.938398+0000 osd.2 (osd.2) 7479 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7479) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:22.942252+0000 osd.2 (osd.2) 7478 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:23.938398+0000 osd.2 (osd.2) 7479 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3794> 2026-01-22T15:40:24.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,7,6,26,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:54.938987+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7480 sent 7479 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:24.893199+0000 osd.2 (osd.2) 7480 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7480) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:24.893199+0000 osd.2 (osd.2) 7480 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3782> 2026-01-22T15:40:25.892+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:55.939259+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7481 sent 7480 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:25.893996+0000 osd.2 (osd.2) 7481 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7481) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:25.893996+0000 osd.2 (osd.2) 7481 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3771> 2026-01-22T15:40:26.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:56.939472+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7482 sent 7481 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:26.929039+0000 osd.2 (osd.2) 7482 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7482) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:26.929039+0000 osd.2 (osd.2) 7482 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3760> 2026-01-22T15:40:27.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:57.939685+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7483 sent 7482 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:27.923083+0000 osd.2 (osd.2) 7483 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3751> 2026-01-22T15:40:28.882+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7483) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:27.923083+0000 osd.2 (osd.2) 7483 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:58.940149+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7484 sent 7483 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:28.883577+0000 osd.2 (osd.2) 7484 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7484) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:28.883577+0000 osd.2 (osd.2) 7484 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3735> 2026-01-22T15:40:29.929+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:39:59.940283+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7485 sent 7484 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:29.930970+0000 osd.2 (osd.2) 7485 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,7,6,26,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7485) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:29.930970+0000 osd.2 (osd.2) 7485 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:00.940523+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3720> 2026-01-22T15:40:30.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:01.940693+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7486 sent 7485 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:30.945025+0000 osd.2 (osd.2) 7486 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3711> 2026-01-22T15:40:31.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7486) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:30.945025+0000 osd.2 (osd.2) 7486 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3706> 2026-01-22T15:40:32.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:02.940900+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7488 sent 7486 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:31.952900+0000 osd.2 (osd.2) 7487 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:32.905729+0000 osd.2 (osd.2) 7488 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7488) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:31.952900+0000 osd.2 (osd.2) 7487 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:32.905729+0000 osd.2 (osd.2) 7488 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:03.941154+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3690> 2026-01-22T15:40:33.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3684> 2026-01-22T15:40:34.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,6,26,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:04.941344+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7490 sent 7488 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:33.949386+0000 osd.2 (osd.2) 7489 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:34.927210+0000 osd.2 (osd.2) 7490 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7490) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:33.949386+0000 osd.2 (osd.2) 7489 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:34.927210+0000 osd.2 (osd.2) 7490 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3670> 2026-01-22T15:40:35.926+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:05.941526+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7491 sent 7490 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:35.928423+0000 osd.2 (osd.2) 7491 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7491) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:35.928423+0000 osd.2 (osd.2) 7491 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,6,26,54,71,31])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3658> 2026-01-22T15:40:36.911+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:06.941696+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7492 sent 7491 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:36.913251+0000 osd.2 (osd.2) 7492 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7492) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:36.913251+0000 osd.2 (osd.2) 7492 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3647> 2026-01-22T15:40:37.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:07.941944+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7493 sent 7492 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:37.892899+0000 osd.2 (osd.2) 7493 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7493) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:37.892899+0000 osd.2 (osd.2) 7493 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3636> 2026-01-22T15:40:38.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,5,27,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:08.942178+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7494 sent 7493 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:38.861547+0000 osd.2 (osd.2) 7494 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,5,27,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7494) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:38.861547+0000 osd.2 (osd.2) 7494 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3620> 2026-01-22T15:40:39.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:09.942420+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7495 sent 7494 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:39.855343+0000 osd.2 (osd.2) 7495 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7495) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:39.855343+0000 osd.2 (osd.2) 7495 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3609> 2026-01-22T15:40:40.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:10.942593+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7496 sent 7495 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:40.876294+0000 osd.2 (osd.2) 7496 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3600> 2026-01-22T15:40:41.888+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:11.942757+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7497 sent 7496 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:41.889680+0000 osd.2 (osd.2) 7497 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7496) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:40.876294+0000 osd.2 (osd.2) 7496 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3589> 2026-01-22T15:40:42.870+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:12.942993+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7498 sent 7497 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:42.871473+0000 osd.2 (osd.2) 7498 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7497) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:41.889680+0000 osd.2 (osd.2) 7497 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7498) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:42.871473+0000 osd.2 (osd.2) 7498 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3576> 2026-01-22T15:40:43.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:13.943222+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7499 sent 7498 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:43.875075+0000 osd.2 (osd.2) 7499 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,5,27,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3563> 2026-01-22T15:40:44.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7499) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:43.875075+0000 osd.2 (osd.2) 7499 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:14.943466+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7500 sent 7499 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:44.902594+0000 osd.2 (osd.2) 7500 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3552> 2026-01-22T15:40:45.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:15.943608+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7501 sent 7500 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:45.902805+0000 osd.2 (osd.2) 7501 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7500) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:44.902594+0000 osd.2 (osd.2) 7500 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3540> 2026-01-22T15:40:46.913+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:16.943826+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7502 sent 7501 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:46.914425+0000 osd.2 (osd.2) 7502 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7501) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:45.902805+0000 osd.2 (osd.2) 7501 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7502) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:46.914425+0000 osd.2 (osd.2) 7502 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3527> 2026-01-22T15:40:47.907+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:17.944071+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7503 sent 7502 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:47.907646+0000 osd.2 (osd.2) 7503 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3517> 2026-01-22T15:40:48.868+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7503) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:47.907646+0000 osd.2 (osd.2) 7503 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:18.944352+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7504 sent 7503 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:48.868880+0000 osd.2 (osd.2) 7504 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3502> 2026-01-22T15:40:49.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:19.944623+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7505 sent 7504 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:49.909507+0000 osd.2 (osd.2) 7505 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7504) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:48.868880+0000 osd.2 (osd.2) 7504 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3491> 2026-01-22T15:40:50.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:20.945168+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7506 sent 7505 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:50.920888+0000 osd.2 (osd.2) 7506 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7505) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:49.909507+0000 osd.2 (osd.2) 7505 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7506) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:50.920888+0000 osd.2 (osd.2) 7506 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:21.945442+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3475> 2026-01-22T15:40:51.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:22.945598+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7507 sent 7506 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:51.952033+0000 osd.2 (osd.2) 7507 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3465> 2026-01-22T15:40:52.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7507) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:51.952033+0000 osd.2 (osd.2) 7507 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:23.945835+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7508 sent 7507 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:52.946626+0000 osd.2 (osd.2) 7508 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3454> 2026-01-22T15:40:53.953+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7508) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:52.946626+0000 osd.2 (osd.2) 7508 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3446> 2026-01-22T15:40:54.915+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:24.946055+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7510 sent 7508 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:53.954028+0000 osd.2 (osd.2) 7509 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:54.916690+0000 osd.2 (osd.2) 7510 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7510) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:53.954028+0000 osd.2 (osd.2) 7509 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:54.916690+0000 osd.2 (osd.2) 7510 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3433> 2026-01-22T15:40:55.918+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:25.946246+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7511 sent 7510 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:55.919536+0000 osd.2 (osd.2) 7511 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7511) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:55.919536+0000 osd.2 (osd.2) 7511 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3421> 2026-01-22T15:40:56.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:26.946456+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7512 sent 7511 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:56.921117+0000 osd.2 (osd.2) 7512 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7512) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:56.921117+0000 osd.2 (osd.2) 7512 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3410> 2026-01-22T15:40:57.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:27.946669+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7513 sent 7512 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:57.919145+0000 osd.2 (osd.2) 7513 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7513) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:57.919145+0000 osd.2 (osd.2) 7513 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:28.946906+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3396> 2026-01-22T15:40:58.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:29.947058+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7514 sent 7513 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:58.966789+0000 osd.2 (osd.2) 7514 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3384> 2026-01-22T15:40:59.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7514) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:58.966789+0000 osd.2 (osd.2) 7514 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:30.947287+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7515 sent 7514 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:40:59.951010+0000 osd.2 (osd.2) 7515 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3372> 2026-01-22T15:41:00.947+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3369> 2026-01-22T15:41:01.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:31.947514+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7517 sent 7515 num 3 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:00.949247+0000 osd.2 (osd.2) 7516 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:01.918583+0000 osd.2 (osd.2) 7517 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7515) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:40:59.951010+0000 osd.2 (osd.2) 7515 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,6,6,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3356> 2026-01-22T15:41:02.914+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:32.947756+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7518 sent 7517 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:02.915709+0000 osd.2 (osd.2) 7518 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7517) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:00.949247+0000 osd.2 (osd.2) 7516 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:01.918583+0000 osd.2 (osd.2) 7517 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,6,6,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3343> 2026-01-22T15:41:03.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:33.947980+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7519 sent 7518 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:03.926701+0000 osd.2 (osd.2) 7519 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7518) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:02.915709+0000 osd.2 (osd.2) 7518 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7519) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:03.926701+0000 osd.2 (osd.2) 7519 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:34.948137+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3324> 2026-01-22T15:41:04.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:35.948307+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7520 sent 7519 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:04.973438+0000 osd.2 (osd.2) 7520 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3315> 2026-01-22T15:41:05.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7520) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:04.973438+0000 osd.2 (osd.2) 7520 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:36.948547+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7521 sent 7520 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:05.976182+0000 osd.2 (osd.2) 7521 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3304> 2026-01-22T15:41:06.971+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7521) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:05.976182+0000 osd.2 (osd.2) 7521 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,6,6,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:37.948841+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7522 sent 7521 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:06.973267+0000 osd.2 (osd.2) 7522 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3292> 2026-01-22T15:41:07.999+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7522) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:06.973267+0000 osd.2 (osd.2) 7522 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:38.949016+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7523 sent 7522 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:08.001263+0000 osd.2 (osd.2) 7523 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3280> 2026-01-22T15:41:08.985+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:39.950284+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7524 sent 7523 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:08.987155+0000 osd.2 (osd.2) 7524 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3268> 2026-01-22T15:41:09.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7523) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:08.001263+0000 osd.2 (osd.2) 7523 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:40.951488+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7525 sent 7524 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:09.996891+0000 osd.2 (osd.2) 7525 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3257> 2026-01-22T15:41:11.039+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7524) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:08.987155+0000 osd.2 (osd.2) 7524 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7525) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:09.996891+0000 osd.2 (osd.2) 7525 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:41.952350+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7526 sent 7525 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:11.041419+0000 osd.2 (osd.2) 7526 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3244> 2026-01-22T15:41:11.993+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7526) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:11.041419+0000 osd.2 (osd.2) 7526 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:42.953073+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7527 sent 7526 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:11.995141+0000 osd.2 (osd.2) 7527 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3232> 2026-01-22T15:41:13.025+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7527) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:11.995141+0000 osd.2 (osd.2) 7527 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:43.953675+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7528 sent 7527 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:13.026459+0000 osd.2 (osd.2) 7528 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3221> 2026-01-22T15:41:14.072+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7528) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:13.026459+0000 osd.2 (osd.2) 7528 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:44.954093+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7529 sent 7528 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:14.073383+0000 osd.2 (osd.2) 7529 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3207> 2026-01-22T15:41:15.119+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7529) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:14.073383+0000 osd.2 (osd.2) 7529 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:45.954385+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7530 sent 7529 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:15.121203+0000 osd.2 (osd.2) 7530 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3196> 2026-01-22T15:41:16.115+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7530) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:15.121203+0000 osd.2 (osd.2) 7530 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:46.954975+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7531 sent 7530 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:16.117244+0000 osd.2 (osd.2) 7531 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3184> 2026-01-22T15:41:17.147+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7531) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:16.117244+0000 osd.2 (osd.2) 7531 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:47.955412+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7532 sent 7531 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:17.149176+0000 osd.2 (osd.2) 7532 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3173> 2026-01-22T15:41:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7532) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:17.149176+0000 osd.2 (osd.2) 7532 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:48.955722+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7533 sent 7532 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:18.114160+0000 osd.2 (osd.2) 7533 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3162> 2026-01-22T15:41:19.131+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7533) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:18.114160+0000 osd.2 (osd.2) 7533 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:49.956511+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7534 sent 7533 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:19.131959+0000 osd.2 (osd.2) 7534 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3148> 2026-01-22T15:41:20.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7534) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:19.131959+0000 osd.2 (osd.2) 7534 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:50.957050+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7535 sent 7534 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:20.120924+0000 osd.2 (osd.2) 7535 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3137> 2026-01-22T15:41:21.077+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7535) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:20.120924+0000 osd.2 (osd.2) 7535 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:51.957779+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7536 sent 7535 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:21.078064+0000 osd.2 (osd.2) 7536 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3126> 2026-01-22T15:41:22.065+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7536) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:21.078064+0000 osd.2 (osd.2) 7536 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:52.958214+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7537 sent 7536 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:22.065619+0000 osd.2 (osd.2) 7537 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3114> 2026-01-22T15:41:23.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7537) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:22.065619+0000 osd.2 (osd.2) 7537 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:53.958675+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7538 sent 7537 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:23.042139+0000 osd.2 (osd.2) 7538 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3103> 2026-01-22T15:41:24.073+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7538) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:23.042139+0000 osd.2 (osd.2) 7538 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:54.958910+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7539 sent 7538 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:24.074281+0000 osd.2 (osd.2) 7539 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3089> 2026-01-22T15:41:25.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7539) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:24.074281+0000 osd.2 (osd.2) 7539 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:55.959247+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7540 sent 7539 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:25.090906+0000 osd.2 (osd.2) 7540 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3078> 2026-01-22T15:41:26.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3075> 2026-01-22T15:41:27.061+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:57.115265+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7542 sent 7540 num 3 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:26.051098+0000 osd.2 (osd.2) 7541 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:27.062732+0000 osd.2 (osd.2) 7542 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,9,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7540) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:25.090906+0000 osd.2 (osd.2) 7540 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3062> 2026-01-22T15:41:28.016+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:58.115645+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 3 last_log 7543 sent 7542 num 3 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:28.017602+0000 osd.2 (osd.2) 7543 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7542) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:26.051098+0000 osd.2 (osd.2) 7541 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:27.062732+0000 osd.2 (osd.2) 7542 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7543) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:28.017602+0000 osd.2 (osd.2) 7543 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3048> 2026-01-22T15:41:29.057+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:40:59.116023+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7544 sent 7543 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:29.057787+0000 osd.2 (osd.2) 7544 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7544) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:29.057787+0000 osd.2 (osd.2) 7544 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3034> 2026-01-22T15:41:30.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:00.116285+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7545 sent 7544 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:30.045071+0000 osd.2 (osd.2) 7545 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,9,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7545) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:30.045071+0000 osd.2 (osd.2) 7545 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3022> 2026-01-22T15:41:31.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:01.116544+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7546 sent 7545 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:31.042730+0000 osd.2 (osd.2) 7546 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3013> 2026-01-22T15:41:31.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7546) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:31.042730+0000 osd.2 (osd.2) 7546 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:02.116770+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7547 sent 7546 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:31.996780+0000 osd.2 (osd.2) 7547 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7547) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:31.996780+0000 osd.2 (osd.2) 7547 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -3000> 2026-01-22T15:41:33.015+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:03.116995+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7548 sent 7547 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:33.017777+0000 osd.2 (osd.2) 7548 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7548) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:33.017777+0000 osd.2 (osd.2) 7548 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2989> 2026-01-22T15:41:34.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:04.117191+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7549 sent 7548 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:34.045694+0000 osd.2 (osd.2) 7549 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,9,27,53,67,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2975> 2026-01-22T15:41:35.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:05.117370+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7550 sent 7549 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:35.083210+0000 osd.2 (osd.2) 7550 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7549) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:34.045694+0000 osd.2 (osd.2) 7549 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2964> 2026-01-22T15:41:36.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:06.117552+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7551 sent 7550 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:36.065589+0000 osd.2 (osd.2) 7551 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7550) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:35.083210+0000 osd.2 (osd.2) 7550 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7551) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:36.065589+0000 osd.2 (osd.2) 7551 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2951> 2026-01-22T15:41:37.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:07.117775+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7552 sent 7551 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:37.072031+0000 osd.2 (osd.2) 7552 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7552) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:37.072031+0000 osd.2 (osd.2) 7552 : cluster [WRN] 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735a80b800 session 0x55735c6230e0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c5ccc00
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2938> 2026-01-22T15:41:38.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:08.118012+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7553 sent 7552 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:38.083019+0000 osd.2 (osd.2) 7553 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7553) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:38.083019+0000 osd.2 (osd.2) 7553 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,5,0,0,0,0,0,0,3,4,8,28,53,67,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2926> 2026-01-22T15:41:39.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:09.118249+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7554 sent 7553 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:39.082010+0000 osd.2 (osd.2) 7554 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7554) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:39.082010+0000 osd.2 (osd.2) 7554 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2912> 2026-01-22T15:41:40.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:10.118534+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7555 sent 7554 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:40.051437+0000 osd.2 (osd.2) 7555 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7555) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:40.051437+0000 osd.2 (osd.2) 7555 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,0,0,3,4,8,28,53,67,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2900> 2026-01-22T15:41:41.056+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:11.119011+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7556 sent 7555 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:41.057890+0000 osd.2 (osd.2) 7556 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7556) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:41.057890+0000 osd.2 (osd.2) 7556 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2889> 2026-01-22T15:41:42.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:12.119282+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7557 sent 7556 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:42.065958+0000 osd.2 (osd.2) 7557 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7557) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:42.065958+0000 osd.2 (osd.2) 7557 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2878> 2026-01-22T15:41:43.017+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:13.119526+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7558 sent 7557 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:43.018859+0000 osd.2 (osd.2) 7558 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7558) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:43.018859+0000 osd.2 (osd.2) 7558 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,0,3,4,8,24,55,69,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2866> 2026-01-22T15:41:44.063+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:14.119755+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7559 sent 7558 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:44.064637+0000 osd.2 (osd.2) 7559 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7559) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:44.064637+0000 osd.2 (osd.2) 7559 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2852> 2026-01-22T15:41:45.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:15.119983+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7560 sent 7559 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:45.097460+0000 osd.2 (osd.2) 7560 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7560) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:45.097460+0000 osd.2 (osd.2) 7560 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2841> 2026-01-22T15:41:46.102+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:16.120664+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7561 sent 7560 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:46.103722+0000 osd.2 (osd.2) 7561 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2832> 2026-01-22T15:41:47.101+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7561) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:46.103722+0000 osd.2 (osd.2) 7561 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:17.120877+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7562 sent 7561 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:47.102694+0000 osd.2 (osd.2) 7562 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,3,4,8,24,55,69,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2820> 2026-01-22T15:41:48.076+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:18.121068+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7563 sent 7562 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:48.078043+0000 osd.2 (osd.2) 7563 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7562) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:47.102694+0000 osd.2 (osd.2) 7562 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c5ce400 session 0x55735a319860
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c5ca400
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2807> 2026-01-22T15:41:49.104+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:19.121256+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7564 sent 7563 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:49.105649+0000 osd.2 (osd.2) 7564 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7563) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:48.078043+0000 osd.2 (osd.2) 7563 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7564) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:49.105649+0000 osd.2 (osd.2) 7564 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2791> 2026-01-22T15:41:50.093+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:20.121578+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7565 sent 7564 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:50.094817+0000 osd.2 (osd.2) 7565 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7565) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:50.094817+0000 osd.2 (osd.2) 7565 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,5,0,0,3,1,11,24,55,69,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2779> 2026-01-22T15:41:51.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:21.121822+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7566 sent 7565 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:51.060883+0000 osd.2 (osd.2) 7566 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7566) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:51.060883+0000 osd.2 (osd.2) 7566 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2768> 2026-01-22T15:41:52.098+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:22.122010+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7567 sent 7566 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:52.099971+0000 osd.2 (osd.2) 7567 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7567) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:52.099971+0000 osd.2 (osd.2) 7567 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2757> 2026-01-22T15:41:53.071+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:23.122181+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7568 sent 7567 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:53.073286+0000 osd.2 (osd.2) 7568 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7568) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:53.073286+0000 osd.2 (osd.2) 7568 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,0,4,11,23,56,69,36])
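
The heartbeat line packs store_statfs counters as hex byte counts; decoding them shows this OSD is nearly empty, so the slow requests above are not a capacity problem. The field names below follow my reading of the printed order (available / internally reserved / total, then data stored / allocated) and should be treated as an assumption, not something this log states:

```python
# Convert the hex counters from the heartbeat's store_statfs(...) blob into
# readable sizes. Field names are my reading of the printed order, not
# ground truth from the log itself.
fields = {
    "available":      0x1b13d2000,
    "int_reserved":   0x0,
    "total":          0x1bfc00000,
    "data_stored":    0xb7a7f4f,
    "data_allocated": 0xa68b000,
    "omap":           0x63a,
    "meta":           0x419f9c6,
}

def human(n: float) -> str:
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} PiB"

for name, value in fields.items():
    print(f"{name:>14}: {human(value)}")
# total comes out near 7.0 GiB with ~6.8 GiB still available, so the stuck
# requests are not a full-disk condition.
```
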
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2745> 2026-01-22T15:41:54.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:24.122363+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7569 sent 7568 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:54.031712+0000 osd.2 (osd.2) 7569 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7569) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:54.031712+0000 osd.2 (osd.2) 7569 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
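
The tune_memory lines show the priority-cache autotuner working against a 4 GiB target (4294967296) while the process has only ~207 MiB actually mapped, so the aggregate cache budget stays pinned at 2845415832 bytes; _resize_shards then carves that budget into the kv / kv_onode / meta / data shards. Recomputing the split from the logged numbers (reading each *_alloc as a share of cache_size is my interpretation):

```python
# Recompute the cache split from the _resize_shards line above.
cache_size = 2_845_415_832
shards = {
    "kv":       (1_207_959_552, 2_144),
    "kv_onode": (  234_881_024,   464),
    "meta":     (1_140_850_688, 2_732_215),
    "data":     (  218_103_808, 13_586_432),
}
for name, (alloc, used) in shards.items():
    print(f"{name:>8}: alloc {alloc / 2**20:8.0f} MiB "
          f"({alloc / cache_size:5.1%} of cache) "
          f"used {used / 2**20:7.2f} MiB")
# Allocations dwarf actual usage, matching the tiny mapped footprint in the
# tune_memory line: this OSD is idle apart from the stuck requests.
```
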
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2731> 2026-01-22T15:41:55.042+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:25.122544+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7570 sent 7569 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:55.043526+0000 osd.2 (osd.2) 7570 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7570) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:55.043526+0000 osd.2 (osd.2) 7570 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2720> 2026-01-22T15:41:56.075+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:26.122724+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7571 sent 7570 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:56.075673+0000 osd.2 (osd.2) 7571 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,5,0,0,4,11,23,56,69,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7571) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:56.075673+0000 osd.2 (osd.2) 7571 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2708> 2026-01-22T15:41:57.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:27.122906+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7572 sent 7571 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:57.120794+0000 osd.2 (osd.2) 7572 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7572) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:57.120794+0000 osd.2 (osd.2) 7572 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:28.123120+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2694> 2026-01-22T15:41:58.125+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,4,11,23,56,69,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:29.123286+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7573 sent 7572 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:58.126169+0000 osd.2 (osd.2) 7573 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2684> 2026-01-22T15:41:59.126+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7573) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:58.126169+0000 osd.2 (osd.2) 7573 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:30.123645+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7574 sent 7573 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:41:59.127168+0000 osd.2 (osd.2) 7574 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2670> 2026-01-22T15:42:00.141+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,4,11,23,55,70,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7574) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:41:59.127168+0000 osd.2 (osd.2) 7574 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2664> 2026-01-22T15:42:01.100+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735a6e5c00 session 0x55735d390f00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c7cb000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:31.124183+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7576 sent 7574 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:00.141580+0000 osd.2 (osd.2) 7575 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:01.100993+0000 osd.2 (osd.2) 7576 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7576) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:00.141580+0000 osd.2 (osd.2) 7575 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:01.100993+0000 osd.2 (osd.2) 7576 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2649> 2026-01-22T15:42:02.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:32.124395+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7577 sent 7576 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:02.095932+0000 osd.2 (osd.2) 7577 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7577) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:02.095932+0000 osd.2 (osd.2) 7577 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:33.124669+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2635> 2026-01-22T15:42:03.134+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:34.124823+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7578 sent 7577 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:03.134694+0000 osd.2 (osd.2) 7578 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2626> 2026-01-22T15:42:04.171+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7578) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:03.134694+0000 osd.2 (osd.2) 7578 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:35.125072+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7579 sent 7578 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:04.171659+0000 osd.2 (osd.2) 7579 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2612> 2026-01-22T15:42:05.150+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7579) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:04.171659+0000 osd.2 (osd.2) 7579 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,0,0,4,11,23,55,70,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2606> 2026-01-22T15:42:06.108+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:36.125243+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7581 sent 7579 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:05.150737+0000 osd.2 (osd.2) 7580 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:06.108697+0000 osd.2 (osd.2) 7581 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7581) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:05.150737+0000 osd.2 (osd.2) 7580 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:06.108697+0000 osd.2 (osd.2) 7581 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2593> 2026-01-22T15:42:07.099+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:37.125456+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7582 sent 7581 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:07.100669+0000 osd.2 (osd.2) 7582 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,0,0,4,11,23,55,70,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7582) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:07.100669+0000 osd.2 (osd.2) 7582 : cluster [WRN] 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:38.125642+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2578> 2026-01-22T15:42:08.145+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
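
Between the 15:42:07 and 15:42:08 entries the warning steps from 6 to 7 slow requests and the vms share from 5 to 6, i.e. one more delayed op crossed the complaint threshold. To follow that progression without reading every line, a regex over the cluster [WRN] messages is enough; the pattern below is written against the exact format shown above, and the script name in the usage note is arbitrary:

```python
# Track the slow-request counters over time by parsing the cluster [WRN]
# lines in this capture. Reads journal text from stdin and prints one line
# per change (each warning is repeated by "will send"/"logged" pairs, so we
# deduplicate on the counter values).
import re
import sys

pat = re.compile(
    r"cluster \[WRN\] (\d+) slow requests "
    r"\(by type \[ '(\w+)' : (\d+) \] most affected pool \[ '(\w+)' : (\d+) \]\)"
)

last = None
for line in sys.stdin:
    m = pat.search(line)
    if not m:
        continue
    total, typ, by_type, pool, in_pool = m.groups()
    key = (total, pool, in_pool)
    if key != last:
        print(f"{total:>4} slow ({typ}={by_type}), worst pool {pool}={in_pool}")
        last = key
```

Run it as, say, `journalctl -t ceph-osd | python3 slow_trend.py` — `-t` filters on the syslog identifier seen in these lines; `slow_trend.py` is just a placeholder name.
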
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:39.125776+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7583 sent 7582 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:08.146612+0000 osd.2 (osd.2) 7583 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2569> 2026-01-22T15:42:09.178+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7583) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:08.146612+0000 osd.2 (osd.2) 7583 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:40.125978+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7584 sent 7583 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:09.179616+0000 osd.2 (osd.2) 7584 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2555> 2026-01-22T15:42:10.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7584) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:09.179616+0000 osd.2 (osd.2) 7584 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:41.126264+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7585 sent 7584 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:10.177651+0000 osd.2 (osd.2) 7585 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2544> 2026-01-22T15:42:11.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7585) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:10.177651+0000 osd.2 (osd.2) 7585 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,0,4,11,20,58,70,36])
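
This heartbeat's op hist differs from the one printed a few heartbeats earlier even though both vectors sum to 206: counts have shifted one bucket to the right. Reading the vector as a power-of-two-bucket age histogram (my interpretation; the log does not label the buckets), that rightward drift is exactly what stuck requests aging in place would produce:

```python
# Diff the "op hist" vectors from two heartbeats above to see which buckets
# moved. The bucket semantics (bucket i covering roughly up to 2**i) are an
# assumption, not stated in the log.
h1 = [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,0,0,4,11,23,55,70,36]  # earlier heartbeat
h2 = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,0,4,11,20,58,70,36]  # this heartbeat

print(f"totals: {sum(h1)} -> {sum(h2)}")          # same population, 206 ops
for i, (a, b) in enumerate(zip(h1, h2)):
    if a != b:
        print(f"bucket {i} (~2**{i}): {a} -> {b} ({b - a:+d})")
```
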
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:42.126524+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7586 sent 7585 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:11.131026+0000 osd.2 (osd.2) 7586 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2532> 2026-01-22T15:42:12.157+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:43.126810+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7587 sent 7586 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:12.158467+0000 osd.2 (osd.2) 7587 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2523> 2026-01-22T15:42:13.191+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7586) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:11.131026+0000 osd.2 (osd.2) 7586 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,0,4,11,20,58,70,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:44.127012+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7588 sent 7587 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:13.192065+0000 osd.2 (osd.2) 7588 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2511> 2026-01-22T15:42:14.160+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7587) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:12.158467+0000 osd.2 (osd.2) 7587 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7588) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:13.192065+0000 osd.2 (osd.2) 7588 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2501> 2026-01-22T15:42:15.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:45.127262+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7590 sent 7588 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:14.161143+0000 osd.2 (osd.2) 7589 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:15.121854+0000 osd.2 (osd.2) 7590 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7590) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:14.161143+0000 osd.2 (osd.2) 7589 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:15.121854+0000 osd.2 (osd.2) 7590 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2488> 2026-01-22T15:42:16.107+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:46.127613+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7591 sent 7590 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:16.108946+0000 osd.2 (osd.2) 7591 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7591) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:16.108946+0000 osd.2 (osd.2) 7591 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:47.130764+0000)
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2474> 2026-01-22T15:42:17.149+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2471> 2026-01-22T15:42:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:48.131262+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7593 sent 7591 num 2 unsent 2 sending 2
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:17.151010+0000 osd.2 (osd.2) 7592 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:18.113582+0000 osd.2 (osd.2) 7593 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,5,0,4,11,20,58,70,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7593) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:17.151010+0000 osd.2 (osd.2) 7592 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:18.113582+0000 osd.2 (osd.2) 7593 : cluster [WRN] 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2457> 2026-01-22T15:42:19.127+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
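
Here the report jumps from 7 to 179 slow ops and the oldest op changes character: it now cites osdmap epoch e50 with no RETRY flag, older than the e153 RETRY=4 op quoted for the preceding minutes, and 104 of the 179 again land in the vms pool. That pattern suggests a backlog of long-queued requests crossed the complaint window (30 s by default) together rather than a fresh burst of traffic. For live triage the usual next step is to ask the daemon itself over its admin socket; `dump_ops_in_flight` is a real admin-socket command, while the invocation context (e.g. inside `cephadm shell --fsid 088fe176-0106-5401-803c-2da38b73b76a` on compute-2 for this containerized deployment) is an assumption:

```python
# Summarize in-flight ops on osd.2 by age via the admin socket.
# 'ceph daemon osd.2 dump_ops_in_flight' is a real command; reaching the
# daemon's asok from wherever this runs is an assumption about the setup.
import json
import subprocess

raw = subprocess.run(
    ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
    capture_output=True, text=True, check=True,
).stdout
ops = json.loads(raw)["ops"]
if not ops:
    print("no ops in flight")
else:
    ages = sorted(float(op["age"]) for op in ops)
    print(f"{len(ops)} ops in flight; "
          f"median age {ages[len(ages) // 2]:.0f}s, oldest {ages[-1]:.0f}s")
```
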
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:49.131483+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7594 sent 7593 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:19.128521+0000 osd.2 (osd.2) 7594 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,5,0,4,11,20,58,70,36])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2444> 2026-01-22T15:42:20.128+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:50.131652+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7595 sent 7594 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:20.129555+0000 osd.2 (osd.2) 7595 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7594) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:19.128521+0000 osd.2 (osd.2) 7594 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2433> 2026-01-22T15:42:21.123+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:51.132396+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7596 sent 7595 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:21.124746+0000 osd.2 (osd.2) 7596 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7595) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:20.129555+0000 osd.2 (osd.2) 7595 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7596) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:21.124746+0000 osd.2 (osd.2) 7596 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2420> 2026-01-22T15:42:22.117+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:52.132890+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7597 sent 7596 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:22.118510+0000 osd.2 (osd.2) 7597 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7597) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:22.118510+0000 osd.2 (osd.2) 7597 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2409> 2026-01-22T15:42:23.111+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:53.133411+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7598 sent 7597 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:23.112796+0000 osd.2 (osd.2) 7598 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7598) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:23.112796+0000 osd.2 (osd.2) 7598 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:54.133923+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2394> 2026-01-22T15:42:24.161+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:55.134081+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7599 sent 7598 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:24.162455+0000 osd.2 (osd.2) 7599 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2382> 2026-01-22T15:42:25.204+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,0,4,11,20,55,73,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7599) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:24.162455+0000 osd.2 (osd.2) 7599 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:56.134551+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7600 sent 7599 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:25.206034+0000 osd.2 (osd.2) 7600 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2370> 2026-01-22T15:42:26.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7600) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:25.206034+0000 osd.2 (osd.2) 7600 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:57.134755+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7601 sent 7600 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:26.182239+0000 osd.2 (osd.2) 7601 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2359> 2026-01-22T15:42:27.211+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c5fb400 session 0x55735b6a8000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c5fa400
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7601) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:26.182239+0000 osd.2 (osd.2) 7601 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:58.135011+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7602 sent 7601 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:27.213089+0000 osd.2 (osd.2) 7602 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2346> 2026-01-22T15:42:28.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,0,4,11,20,55,73,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7602) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:27.213089+0000 osd.2 (osd.2) 7602 : cluster [WRN] 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:41:59.135214+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7603 sent 7602 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:28.182427+0000 osd.2 (osd.2) 7603 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2334> 2026-01-22T15:42:29.192+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7603) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:28.182427+0000 osd.2 (osd.2) 7603 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:00.135693+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7604 sent 7603 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:29.194455+0000 osd.2 (osd.2) 7604 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2320> 2026-01-22T15:42:30.148+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7604) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:29.194455+0000 osd.2 (osd.2) 7604 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:01.136074+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7605 sent 7604 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:30.149754+0000 osd.2 (osd.2) 7605 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2309> 2026-01-22T15:42:31.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7605) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:30.149754+0000 osd.2 (osd.2) 7605 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:02.136305+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7606 sent 7605 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:31.178232+0000 osd.2 (osd.2) 7606 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2298> 2026-01-22T15:42:32.181+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7606) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:31.178232+0000 osd.2 (osd.2) 7606 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:03.136510+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7607 sent 7606 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:32.183385+0000 osd.2 (osd.2) 7607 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2287> 2026-01-22T15:42:33.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:04.136746+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7608 sent 7607 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:33.213414+0000 osd.2 (osd.2) 7608 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7607) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:32.183385+0000 osd.2 (osd.2) 7607 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7608) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:33.213414+0000 osd.2 (osd.2) 7608 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2274> 2026-01-22T15:42:34.244+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,7,0,4,11,20,55,73,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:05.137037+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7609 sent 7608 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:34.244482+0000 osd.2 (osd.2) 7609 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7609) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:34.244482+0000 osd.2 (osd.2) 7609 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2259> 2026-01-22T15:42:35.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:06.137273+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7610 sent 7609 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:35.238639+0000 osd.2 (osd.2) 7610 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7610) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:35.238639+0000 osd.2 (osd.2) 7610 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2248> 2026-01-22T15:42:36.233+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,73,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:07.137564+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7611 sent 7610 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:36.234120+0000 osd.2 (osd.2) 7611 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7611) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:36.234120+0000 osd.2 (osd.2) 7611 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2236> 2026-01-22T15:42:37.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,73,36])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:08.137818+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7612 sent 7611 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:37.242021+0000 osd.2 (osd.2) 7612 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7612) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:37.242021+0000 osd.2 (osd.2) 7612 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2224> 2026-01-22T15:42:38.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:09.138024+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7613 sent 7612 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:38.238713+0000 osd.2 (osd.2) 7613 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2215> 2026-01-22T15:42:39.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7613) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:38.238713+0000 osd.2 (osd.2) 7613 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:10.138291+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7614 sent 7613 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:39.242573+0000 osd.2 (osd.2) 7614 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2201> 2026-01-22T15:42:40.235+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7614) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:39.242573+0000 osd.2 (osd.2) 7614 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:11.138604+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7615 sent 7614 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:40.235572+0000 osd.2 (osd.2) 7615 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2190> 2026-01-22T15:42:41.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7615) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:40.235572+0000 osd.2 (osd.2) 7615 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:12.138848+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7616 sent 7615 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:41.270343+0000 osd.2 (osd.2) 7616 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2178> 2026-01-22T15:42:42.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7616) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:41.270343+0000 osd.2 (osd.2) 7616 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:13.139132+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7617 sent 7616 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:42.269915+0000 osd.2 (osd.2) 7617 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2166> 2026-01-22T15:42:43.282+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7617) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:42.269915+0000 osd.2 (osd.2) 7617 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,5,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:14.139387+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7618 sent 7617 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:43.283183+0000 osd.2 (osd.2) 7618 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2154> 2026-01-22T15:42:44.332+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7618) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:43.283183+0000 osd.2 (osd.2) 7618 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:15.139653+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7619 sent 7618 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:44.333515+0000 osd.2 (osd.2) 7619 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2140> 2026-01-22T15:42:45.295+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7619) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:44.333515+0000 osd.2 (osd.2) 7619 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:16.139924+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7620 sent 7619 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:45.296887+0000 osd.2 (osd.2) 7620 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2128> 2026-01-22T15:42:46.258+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7620) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:45.296887+0000 osd.2 (osd.2) 7620 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:17.140167+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7621 sent 7620 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:46.259508+0000 osd.2 (osd.2) 7621 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2116> 2026-01-22T15:42:47.237+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7621) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:46.259508+0000 osd.2 (osd.2) 7621 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:18.140378+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7622 sent 7621 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:47.239109+0000 osd.2 (osd.2) 7622 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2105> 2026-01-22T15:42:48.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:19.140642+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7623 sent 7622 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:48.224097+0000 osd.2 (osd.2) 7623 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7622) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:47.239109+0000 osd.2 (osd.2) 7622 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2094> 2026-01-22T15:42:49.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:20.140852+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7624 sent 7623 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:49.214614+0000 osd.2 (osd.2) 7624 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7623) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:48.224097+0000 osd.2 (osd.2) 7623 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7624) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:49.214614+0000 osd.2 (osd.2) 7624 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2078> 2026-01-22T15:42:50.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:21.141129+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7625 sent 7624 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:50.265120+0000 osd.2 (osd.2) 7625 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7625) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:50.265120+0000 osd.2 (osd.2) 7625 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2067> 2026-01-22T15:42:51.307+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:22.141402+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7626 sent 7625 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:51.308860+0000 osd.2 (osd.2) 7626 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7626) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:51.308860+0000 osd.2 (osd.2) 7626 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2055> 2026-01-22T15:42:52.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:23.141656+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7627 sent 7626 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:52.330568+0000 osd.2 (osd.2) 7627 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7627) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:52.330568+0000 osd.2 (osd.2) 7627 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2043> 2026-01-22T15:42:53.285+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:24.141849+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7628 sent 7627 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:53.286894+0000 osd.2 (osd.2) 7628 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7628) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:53.286894+0000 osd.2 (osd.2) 7628 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2032> 2026-01-22T15:42:54.317+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,6,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:25.141991+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7629 sent 7628 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:54.319111+0000 osd.2 (osd.2) 7629 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7629) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:54.319111+0000 osd.2 (osd.2) 7629 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2017> 2026-01-22T15:42:55.357+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:26.142241+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7630 sent 7629 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:55.358861+0000 osd.2 (osd.2) 7630 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7630) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:55.358861+0000 osd.2 (osd.2) 7630 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -2006> 2026-01-22T15:42:56.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:27.142578+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7631 sent 7630 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:56.374183+0000 osd.2 (osd.2) 7631 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7631) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:56.374183+0000 osd.2 (osd.2) 7631 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1995> 2026-01-22T15:42:57.339+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:28.142793+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7632 sent 7631 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:57.340925+0000 osd.2 (osd.2) 7632 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7632) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:57.340925+0000 osd.2 (osd.2) 7632 : cluster [WRN] 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1984> 2026-01-22T15:42:58.294+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,6,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:29.143093+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7633 sent 7632 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:58.295980+0000 osd.2 (osd.2) 7633 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1974> 2026-01-22T15:42:59.270+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7633) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:58.295980+0000 osd.2 (osd.2) 7633 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:30.143348+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7634 sent 7633 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:42:59.272136+0000 osd.2 (osd.2) 7634 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1960> 2026-01-22T15:43:00.298+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7634) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:42:59.272136+0000 osd.2 (osd.2) 7634 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:31.143620+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7635 sent 7634 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:00.300281+0000 osd.2 (osd.2) 7635 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1949> 2026-01-22T15:43:01.292+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7635) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:00.300281+0000 osd.2 (osd.2) 7635 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:32.144544+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7636 sent 7635 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:01.294621+0000 osd.2 (osd.2) 7636 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1938> 2026-01-22T15:43:02.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7636) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:01.294621+0000 osd.2 (osd.2) 7636 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:33.144809+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7637 sent 7636 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:02.265695+0000 osd.2 (osd.2) 7637 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1927> 2026-01-22T15:43:03.266+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7637) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:02.265695+0000 osd.2 (osd.2) 7637 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,4,11,20,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:34.145040+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7638 sent 7637 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:03.268545+0000 osd.2 (osd.2) 7638 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1915> 2026-01-22T15:43:04.218+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7638) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:03.268545+0000 osd.2 (osd.2) 7638 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:35.145191+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7639 sent 7638 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:04.219878+0000 osd.2 (osd.2) 7639 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1901> 2026-01-22T15:43:05.195+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7639) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:04.219878+0000 osd.2 (osd.2) 7639 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:36.145389+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7640 sent 7639 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:05.196640+0000 osd.2 (osd.2) 7640 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1890> 2026-01-22T15:43:06.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7640) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:05.196640+0000 osd.2 (osd.2) 7640 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:37.145600+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7641 sent 7640 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:06.224178+0000 osd.2 (osd.2) 7641 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1879> 2026-01-22T15:43:07.257+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7641) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:06.224178+0000 osd.2 (osd.2) 7641 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:38.145851+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7642 sent 7641 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:07.258860+0000 osd.2 (osd.2) 7642 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1868> 2026-01-22T15:43:08.223+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7642) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:07.258860+0000 osd.2 (osd.2) 7642 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:39.146072+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7643 sent 7642 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:08.225382+0000 osd.2 (osd.2) 7643 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1856> 2026-01-22T15:43:09.242+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7643) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:08.225382+0000 osd.2 (osd.2) 7643 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:40.146352+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7644 sent 7643 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:09.244378+0000 osd.2 (osd.2) 7644 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1842> 2026-01-22T15:43:10.234+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7644) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:09.244378+0000 osd.2 (osd.2) 7644 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:41.146702+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7645 sent 7644 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:10.235663+0000 osd.2 (osd.2) 7645 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1831> 2026-01-22T15:43:11.281+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7645) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:10.235663+0000 osd.2 (osd.2) 7645 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:42.146924+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7646 sent 7645 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:11.283278+0000 osd.2 (osd.2) 7646 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1819> 2026-01-22T15:43:12.322+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:43.147123+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7647 sent 7646 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:12.323373+0000 osd.2 (osd.2) 7647 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7646) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:11.283278+0000 osd.2 (osd.2) 7646 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1808> 2026-01-22T15:43:13.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:44.147394+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7648 sent 7647 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:13.329486+0000 osd.2 (osd.2) 7648 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7647) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:12.323373+0000 osd.2 (osd.2) 7647 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1797> 2026-01-22T15:43:14.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:45.147661+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7649 sent 7648 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:14.326091+0000 osd.2 (osd.2) 7649 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1784> 2026-01-22T15:43:15.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7648) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:13.329486+0000 osd.2 (osd.2) 7648 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7649) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:14.326091+0000 osd.2 (osd.2) 7649 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:46.147894+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7650 sent 7649 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:15.370868+0000 osd.2 (osd.2) 7650 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1770> 2026-01-22T15:43:16.403+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:47.148109+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7651 sent 7650 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:16.404088+0000 osd.2 (osd.2) 7651 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1761> 2026-01-22T15:43:17.413+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7650) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:15.370868+0000 osd.2 (osd.2) 7650 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:48.148360+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7652 sent 7651 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:17.414053+0000 osd.2 (osd.2) 7652 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1750> 2026-01-22T15:43:18.424+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7651) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:16.404088+0000 osd.2 (osd.2) 7651 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7652) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:17.414053+0000 osd.2 (osd.2) 7652 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:49.148679+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7653 sent 7652 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:18.424561+0000 osd.2 (osd.2) 7653 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1736> 2026-01-22T15:43:19.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7653) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:18.424561+0000 osd.2 (osd.2) 7653 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:50.148896+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7654 sent 7653 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:19.413172+0000 osd.2 (osd.2) 7654 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1721> 2026-01-22T15:43:20.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7654) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:19.413172+0000 osd.2 (osd.2) 7654 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:51.149079+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7655 sent 7654 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:20.447984+0000 osd.2 (osd.2) 7655 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1710> 2026-01-22T15:43:21.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7655) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:20.447984+0000 osd.2 (osd.2) 7655 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:52.149251+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7656 sent 7655 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:21.408635+0000 osd.2 (osd.2) 7656 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1699> 2026-01-22T15:43:22.386+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7656) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:21.408635+0000 osd.2 (osd.2) 7656 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:53.149521+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7657 sent 7656 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:22.386506+0000 osd.2 (osd.2) 7657 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1688> 2026-01-22T15:43:23.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7657) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:22.386506+0000 osd.2 (osd.2) 7657 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:54.149777+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7658 sent 7657 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:23.354706+0000 osd.2 (osd.2) 7658 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1677> 2026-01-22T15:43:24.378+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7658) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:23.354706+0000 osd.2 (osd.2) 7658 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:55.150090+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7659 sent 7658 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:24.379131+0000 osd.2 (osd.2) 7659 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1663> 2026-01-22T15:43:25.371+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7659) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:24.379131+0000 osd.2 (osd.2) 7659 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:56.150376+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7660 sent 7659 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:25.372765+0000 osd.2 (osd.2) 7660 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1651> 2026-01-22T15:43:26.338+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7660) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:25.372765+0000 osd.2 (osd.2) 7660 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:57.150619+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7661 sent 7660 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:26.339662+0000 osd.2 (osd.2) 7661 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1639> 2026-01-22T15:43:27.388+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7661) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:26.339662+0000 osd.2 (osd.2) 7661 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:58.150911+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7662 sent 7661 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:27.389841+0000 osd.2 (osd.2) 7662 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1628> 2026-01-22T15:43:28.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:42:59.151110+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7663 sent 7662 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:28.374011+0000 osd.2 (osd.2) 7663 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7662) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:27.389841+0000 osd.2 (osd.2) 7662 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1617> 2026-01-22T15:43:29.400+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:00.151364+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7664 sent 7663 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:29.402170+0000 osd.2 (osd.2) 7664 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7663) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:28.374011+0000 osd.2 (osd.2) 7663 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7664) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:29.402170+0000 osd.2 (osd.2) 7664 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1600> 2026-01-22T15:43:30.392+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:01.151608+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7665 sent 7664 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:30.393429+0000 osd.2 (osd.2) 7665 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7665) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:30.393429+0000 osd.2 (osd.2) 7665 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1589> 2026-01-22T15:43:31.394+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:02.151766+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7666 sent 7665 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:31.395648+0000 osd.2 (osd.2) 7666 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7666) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:31.395648+0000 osd.2 (osd.2) 7666 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1578> 2026-01-22T15:43:32.358+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:03.152011+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7667 sent 7666 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:32.359727+0000 osd.2 (osd.2) 7667 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7667) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:32.359727+0000 osd.2 (osd.2) 7667 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1567> 2026-01-22T15:43:33.381+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:04.152278+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7668 sent 7667 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:33.383237+0000 osd.2 (osd.2) 7668 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7668) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:33.383237+0000 osd.2 (osd.2) 7668 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1556> 2026-01-22T15:43:34.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:05.152496+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7669 sent 7668 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:34.395139+0000 osd.2 (osd.2) 7669 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7669) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:34.395139+0000 osd.2 (osd.2) 7669 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1542> 2026-01-22T15:43:35.382+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,9,22,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:06.152781+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7670 sent 7669 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:35.383913+0000 osd.2 (osd.2) 7670 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1532> 2026-01-22T15:43:36.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7670) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:35.383913+0000 osd.2 (osd.2) 7670 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:07.153016+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7671 sent 7670 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:36.374299+0000 osd.2 (osd.2) 7671 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1521> 2026-01-22T15:43:37.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7671) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:36.374299+0000 osd.2 (osd.2) 7671 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:08.153211+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7672 sent 7671 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:37.326417+0000 osd.2 (osd.2) 7672 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1510> 2026-01-22T15:43:38.359+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7672) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:37.326417+0000 osd.2 (osd.2) 7672 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:09.153382+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7673 sent 7672 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:38.360386+0000 osd.2 (osd.2) 7673 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1499> 2026-01-22T15:43:39.379+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,9,22,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7673) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:38.360386+0000 osd.2 (osd.2) 7673 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:10.153610+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7674 sent 7673 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:39.380940+0000 osd.2 (osd.2) 7674 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1484> 2026-01-22T15:43:40.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7674) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:39.380940+0000 osd.2 (osd.2) 7674 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:11.154137+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7675 sent 7674 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:40.356000+0000 osd.2 (osd.2) 7675 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,8,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1472> 2026-01-22T15:43:41.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7675) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:40.356000+0000 osd.2 (osd.2) 7675 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:12.154389+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7676 sent 7675 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:41.355962+0000 osd.2 (osd.2) 7676 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1461> 2026-01-22T15:43:42.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:13.154575+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7677 sent 7676 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:42.331193+0000 osd.2 (osd.2) 7677 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7676) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:41.355962+0000 osd.2 (osd.2) 7676 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1450> 2026-01-22T15:43:43.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:14.154802+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7678 sent 7677 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:43.355963+0000 osd.2 (osd.2) 7678 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7677) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:42.331193+0000 osd.2 (osd.2) 7677 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7678) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:43.355963+0000 osd.2 (osd.2) 7678 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1437> 2026-01-22T15:43:44.402+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:15.155108+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7679 sent 7678 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:44.403770+0000 osd.2 (osd.2) 7679 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7679) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:44.403770+0000 osd.2 (osd.2) 7679 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,8,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1422> 2026-01-22T15:43:45.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:16.155356+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7680 sent 7679 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:45.449131+0000 osd.2 (osd.2) 7680 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1413> 2026-01-22T15:43:46.401+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7680) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:45.449131+0000 osd.2 (osd.2) 7680 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:17.155652+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7681 sent 7680 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:46.403091+0000 osd.2 (osd.2) 7681 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1402> 2026-01-22T15:43:47.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7681) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:46.403091+0000 osd.2 (osd.2) 7681 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,3,9,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:18.155945+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7682 sent 7681 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:47.374114+0000 osd.2 (osd.2) 7682 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1390> 2026-01-22T15:43:48.384+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7682) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:47.374114+0000 osd.2 (osd.2) 7682 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:19.156169+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7683 sent 7682 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:48.386233+0000 osd.2 (osd.2) 7683 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1379> 2026-01-22T15:43:49.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7683) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:48.386233+0000 osd.2 (osd.2) 7683 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:20.156419+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7684 sent 7683 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:49.422208+0000 osd.2 (osd.2) 7684 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1365> 2026-01-22T15:43:50.467+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7684) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:49.422208+0000 osd.2 (osd.2) 7684 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:21.156637+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7685 sent 7684 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:50.467516+0000 osd.2 (osd.2) 7685 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1354> 2026-01-22T15:43:51.492+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,8,9,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:22.156892+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7686 sent 7685 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:51.493258+0000 osd.2 (osd.2) 7686 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7685) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:50.467516+0000 osd.2 (osd.2) 7685 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1342> 2026-01-22T15:43:52.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,8,9,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:23.157143+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7687 sent 7686 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:52.461147+0000 osd.2 (osd.2) 7687 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7686) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:51.493258+0000 osd.2 (osd.2) 7686 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7687) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:52.461147+0000 osd.2 (osd.2) 7687 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1328> 2026-01-22T15:43:53.505+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:24.157338+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7688 sent 7687 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:53.505790+0000 osd.2 (osd.2) 7688 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1319> 2026-01-22T15:43:54.465+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7688) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:53.505790+0000 osd.2 (osd.2) 7688 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:25.157493+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7689 sent 7688 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:54.465804+0000 osd.2 (osd.2) 7689 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1305> 2026-01-22T15:43:55.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7689) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:54.465804+0000 osd.2 (osd.2) 7689 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:26.157729+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7690 sent 7689 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:55.486675+0000 osd.2 (osd.2) 7690 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1294> 2026-01-22T15:43:56.535+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7690) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:55.486675+0000 osd.2 (osd.2) 7690 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:27.157995+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7691 sent 7690 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:56.536042+0000 osd.2 (osd.2) 7691 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1283> 2026-01-22T15:43:57.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,8,9,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7691) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:56.536042+0000 osd.2 (osd.2) 7691 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:28.158401+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7692 sent 7691 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:57.502281+0000 osd.2 (osd.2) 7692 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1271> 2026-01-22T15:43:58.528+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7692) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:57.502281+0000 osd.2 (osd.2) 7692 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:29.158841+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7693 sent 7692 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:58.528615+0000 osd.2 (osd.2) 7693 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1260> 2026-01-22T15:43:59.500+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7693) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:58.528615+0000 osd.2 (osd.2) 7693 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:30.159096+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7694 sent 7693 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:43:59.501049+0000 osd.2 (osd.2) 7694 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,9,9,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1245> 2026-01-22T15:44:00.547+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7694) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:43:59.501049+0000 osd.2 (osd.2) 7694 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,9,9,23,55,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:31.159346+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7695 sent 7694 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:00.547429+0000 osd.2 (osd.2) 7695 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1233> 2026-01-22T15:44:01.532+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7695) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:00.547429+0000 osd.2 (osd.2) 7695 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:32.159542+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7696 sent 7695 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:01.534348+0000 osd.2 (osd.2) 7696 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1222> 2026-01-22T15:44:02.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7696) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:01.534348+0000 osd.2 (osd.2) 7696 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:33.159756+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7697 sent 7696 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:02.530681+0000 osd.2 (osd.2) 7697 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1211> 2026-01-22T15:44:03.520+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7697) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:02.530681+0000 osd.2 (osd.2) 7697 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:34.159939+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7698 sent 7697 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:03.521796+0000 osd.2 (osd.2) 7698 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1200> 2026-01-22T15:44:04.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:35.160245+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7699 sent 7698 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:04.530866+0000 osd.2 (osd.2) 7699 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7698) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:03.521796+0000 osd.2 (osd.2) 7698 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1186> 2026-01-22T15:44:05.515+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:36.160584+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7700 sent 7699 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:05.516976+0000 osd.2 (osd.2) 7700 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1177> 2026-01-22T15:44:06.473+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7699) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:04.530866+0000 osd.2 (osd.2) 7699 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7700) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:05.516976+0000 osd.2 (osd.2) 7700 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:37.160812+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7701 sent 7700 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:06.475080+0000 osd.2 (osd.2) 7701 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7701) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:06.475080+0000 osd.2 (osd.2) 7701 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1160> 2026-01-22T15:44:07.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:38.161083+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7702 sent 7701 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:07.461811+0000 osd.2 (osd.2) 7702 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7702) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:07.461811+0000 osd.2 (osd.2) 7702 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1149> 2026-01-22T15:44:08.472+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:39.161287+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7703 sent 7702 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:08.473677+0000 osd.2 (osd.2) 7703 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1140> 2026-01-22T15:44:09.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7703) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:08.473677+0000 osd.2 (osd.2) 7703 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:40.161574+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7704 sent 7703 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:09.441567+0000 osd.2 (osd.2) 7704 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1126> 2026-01-22T15:44:10.456+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7704) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:09.441567+0000 osd.2 (osd.2) 7704 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:41.161771+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7705 sent 7704 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:10.458105+0000 osd.2 (osd.2) 7705 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1114> 2026-01-22T15:44:11.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7705) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:10.458105+0000 osd.2 (osd.2) 7705 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:42.161956+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7706 sent 7705 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:11.462503+0000 osd.2 (osd.2) 7706 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7706) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:11.462503+0000 osd.2 (osd.2) 7706 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1100> 2026-01-22T15:44:12.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,12,18,60,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:43.162148+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7707 sent 7706 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:12.502445+0000 osd.2 (osd.2) 7707 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1090> 2026-01-22T15:44:13.536+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7707) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:12.502445+0000 osd.2 (osd.2) 7707 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:44.162381+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7708 sent 7707 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:13.537397+0000 osd.2 (osd.2) 7708 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1079> 2026-01-22T15:44:14.526+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7708) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:13.537397+0000 osd.2 (osd.2) 7708 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c338000 session 0x55735c623e00
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735cee9800
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:45.162545+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7709 sent 7708 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:14.528221+0000 osd.2 (osd.2) 7709 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1063> 2026-01-22T15:44:15.488+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7709) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:14.528221+0000 osd.2 (osd.2) 7709 : cluster [WRN] 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:46.182554+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7710 sent 7709 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:15.490006+0000 osd.2 (osd.2) 7710 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1052> 2026-01-22T15:44:16.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7710) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:15.490006+0000 osd.2 (osd.2) 7710 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:47.182876+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7711 sent 7710 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:16.509419+0000 osd.2 (osd.2) 7711 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1041> 2026-01-22T15:44:17.459+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7711) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:16.509419+0000 osd.2 (osd.2) 7711 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:48.183149+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7712 sent 7711 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:17.460512+0000 osd.2 (osd.2) 7712 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,1,7,12,17,61,71,38])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1029> 2026-01-22T15:44:18.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7712) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:17.460512+0000 osd.2 (osd.2) 7712 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:49.183375+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7713 sent 7712 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:18.448911+0000 osd.2 (osd.2) 7713 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1018> 2026-01-22T15:44:19.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7713) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:18.448911+0000 osd.2 (osd.2) 7713 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:50.183599+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7714 sent 7713 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:19.497057+0000 osd.2 (osd.2) 7714 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -1004> 2026-01-22T15:44:20.485+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7714) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:19.497057+0000 osd.2 (osd.2) 7714 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:51.183846+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7715 sent 7714 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:20.486547+0000 osd.2 (osd.2) 7715 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,1,7,12,17,61,70,39])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -992> 2026-01-22T15:44:21.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:52.184116+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7716 sent 7715 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:21.480036+0000 osd.2 (osd.2) 7716 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7715) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:20.486547+0000 osd.2 (osd.2) 7715 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -981> 2026-01-22T15:44:22.510+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:53.184366+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7717 sent 7716 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:22.512054+0000 osd.2 (osd.2) 7717 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -972> 2026-01-22T15:44:23.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7716) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:21.480036+0000 osd.2 (osd.2) 7716 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7717) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:22.512054+0000 osd.2 (osd.2) 7717 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:54.184717+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7718 sent 7717 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:23.507897+0000 osd.2 (osd.2) 7718 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -959> 2026-01-22T15:44:24.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7718) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:23.507897+0000 osd.2 (osd.2) 7718 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:55.184962+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7719 sent 7718 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:24.509392+0000 osd.2 (osd.2) 7719 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -945> 2026-01-22T15:44:25.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7719) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:24.509392+0000 osd.2 (osd.2) 7719 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:56.185838+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7720 sent 7719 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:25.479795+0000 osd.2 (osd.2) 7720 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -934> 2026-01-22T15:44:26.494+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7720) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:25.479795+0000 osd.2 (osd.2) 7720 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,1,7,12,17,60,71,39])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:57.186043+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7721 sent 7720 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:26.495625+0000 osd.2 (osd.2) 7721 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -923> 2026-01-22T15:44:27.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7721) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:26.495625+0000 osd.2 (osd.2) 7721 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:58.186220+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7722 sent 7721 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:27.455858+0000 osd.2 (osd.2) 7722 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -912> 2026-01-22T15:44:28.436+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:43:59.186386+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7723 sent 7722 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:28.437146+0000 osd.2 (osd.2) 7723 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7722) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:27.455858+0000 osd.2 (osd.2) 7722 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -901> 2026-01-22T15:44:29.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:00.186515+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7724 sent 7723 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:29.412470+0000 osd.2 (osd.2) 7724 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7723) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:28.437146+0000 osd.2 (osd.2) 7723 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7724) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:29.412470+0000 osd.2 (osd.2) 7724 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -885> 2026-01-22T15:44:30.435+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:01.186739+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7725 sent 7724 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:30.435665+0000 osd.2 (osd.2) 7725 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -876> 2026-01-22T15:44:31.468+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7725) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:30.435665+0000 osd.2 (osd.2) 7725 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:02.186929+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7726 sent 7725 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:31.469025+0000 osd.2 (osd.2) 7726 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -865> 2026-01-22T15:44:32.430+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7726) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:31.469025+0000 osd.2 (osd.2) 7726 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,1,7,12,16,61,70,40])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:03.187102+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7727 sent 7726 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:32.430534+0000 osd.2 (osd.2) 7727 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -853> 2026-01-22T15:44:33.395+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7727) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:32.430534+0000 osd.2 (osd.2) 7727 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:04.187411+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7728 sent 7727 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:33.395733+0000 osd.2 (osd.2) 7728 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -842> 2026-01-22T15:44:34.368+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7728) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:33.395733+0000 osd.2 (osd.2) 7728 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:05.187647+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7729 sent 7728 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:34.368914+0000 osd.2 (osd.2) 7729 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -831> 2026-01-22T15:44:35.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:06.187834+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7730 sent 7729 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:35.370926+0000 osd.2 (osd.2) 7730 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7729) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:34.368914+0000 osd.2 (osd.2) 7729 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -817> 2026-01-22T15:44:36.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,1,7,12,16,61,70,40])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:07.188251+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7731 sent 7730 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:36.393439+0000 osd.2 (osd.2) 7731 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7730) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:35.370926+0000 osd.2 (osd.2) 7730 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -805> 2026-01-22T15:44:37.387+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:08.188480+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7732 sent 7731 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:37.387515+0000 osd.2 (osd.2) 7732 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,1,7,12,16,61,70,40])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7731) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:36.393439+0000 osd.2 (osd.2) 7731 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7732) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:37.387515+0000 osd.2 (osd.2) 7732 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -791> 2026-01-22T15:44:38.399+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:09.188690+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7733 sent 7732 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:38.399506+0000 osd.2 (osd.2) 7733 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7733) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:38.399506+0000 osd.2 (osd.2) 7733 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -780> 2026-01-22T15:44:39.417+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:10.188905+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7734 sent 7733 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:39.418627+0000 osd.2 (osd.2) 7734 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7734) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:39.418627+0000 osd.2 (osd.2) 7734 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -769> 2026-01-22T15:44:40.376+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:11.189133+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7735 sent 7734 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:40.378275+0000 osd.2 (osd.2) 7735 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7735) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:40.378275+0000 osd.2 (osd.2) 7735 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -755> 2026-01-22T15:44:41.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:12.189388+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7736 sent 7735 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:41.406787+0000 osd.2 (osd.2) 7736 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7736) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:41.406787+0000 osd.2 (osd.2) 7736 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -744> 2026-01-22T15:44:42.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,8,11,17,61,70,40])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:13.189578+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7737 sent 7736 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:42.406823+0000 osd.2 (osd.2) 7737 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -734> 2026-01-22T15:44:43.361+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7737) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:42.406823+0000 osd.2 (osd.2) 7737 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:14.189841+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7738 sent 7737 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:43.362582+0000 osd.2 (osd.2) 7738 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -723> 2026-01-22T15:44:44.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7738) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:43.362582+0000 osd.2 (osd.2) 7738 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:15.190040+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7739 sent 7738 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:44.371406+0000 osd.2 (osd.2) 7739 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -712> 2026-01-22T15:44:45.362+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7739) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:44.371406+0000 osd.2 (osd.2) 7739 : cluster [WRN] 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:16.190241+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7740 sent 7739 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:45.364375+0000 osd.2 (osd.2) 7740 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -698> 2026-01-22T15:44:46.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7740) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:45.364375+0000 osd.2 (osd.2) 7740 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:17.190508+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7741 sent 7740 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:46.355011+0000 osd.2 (osd.2) 7741 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -687> 2026-01-22T15:44:47.366+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:18.190765+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7742 sent 7741 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:47.368162+0000 osd.2 (osd.2) 7742 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -678> 2026-01-22T15:44:48.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7741) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:46.355011+0000 osd.2 (osd.2) 7741 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,17,61,70,40])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:19.191046+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7743 sent 7742 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:48.410535+0000 osd.2 (osd.2) 7743 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -666> 2026-01-22T15:44:49.397+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7742) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:47.368162+0000 osd.2 (osd.2) 7742 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7743) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:48.410535+0000 osd.2 (osd.2) 7743 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:20.191256+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7744 sent 7743 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:49.399145+0000 osd.2 (osd.2) 7744 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -653> 2026-01-22T15:44:50.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7744) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:49.399145+0000 osd.2 (osd.2) 7744 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:21.191755+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7745 sent 7744 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:50.373500+0000 osd.2 (osd.2) 7745 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -639> 2026-01-22T15:44:51.389+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,17,61,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7745) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:50.373500+0000 osd.2 (osd.2) 7745 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:22.191970+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7746 sent 7745 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:51.390671+0000 osd.2 (osd.2) 7746 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -627> 2026-01-22T15:44:52.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7746) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:51.390671+0000 osd.2 (osd.2) 7746 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:23.192184+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7747 sent 7746 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:52.410279+0000 osd.2 (osd.2) 7747 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -616> 2026-01-22T15:44:53.373+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7747) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:52.410279+0000 osd.2 (osd.2) 7747 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:24.192376+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7748 sent 7747 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:53.374443+0000 osd.2 (osd.2) 7748 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -605> 2026-01-22T15:44:54.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,17,61,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7748) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:53.374443+0000 osd.2 (osd.2) 7748 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:25.192543+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7749 sent 7748 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:54.411250+0000 osd.2 (osd.2) 7749 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -593> 2026-01-22T15:44:55.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7749) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:54.411250+0000 osd.2 (osd.2) 7749 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:26.192711+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7750 sent 7749 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:55.407036+0000 osd.2 (osd.2) 7750 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -579> 2026-01-22T15:44:56.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7750) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:55.407036+0000 osd.2 (osd.2) 7750 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:27.192914+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7751 sent 7750 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:56.422792+0000 osd.2 (osd.2) 7751 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -568> 2026-01-22T15:44:57.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7751) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:56.422792+0000 osd.2 (osd.2) 7751 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:28.193145+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7752 sent 7751 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:57.463171+0000 osd.2 (osd.2) 7752 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -557> 2026-01-22T15:44:58.419+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7752) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:57.463171+0000 osd.2 (osd.2) 7752 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:29.193372+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7753 sent 7752 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:58.421238+0000 osd.2 (osd.2) 7753 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -546> 2026-01-22T15:44:59.428+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7753) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:58.421238+0000 osd.2 (osd.2) 7753 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:30.193537+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7754 sent 7753 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:44:59.429389+0000 osd.2 (osd.2) 7754 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -535> 2026-01-22T15:45:00.451+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7754) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:44:59.429389+0000 osd.2 (osd.2) 7754 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:31.193894+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7755 sent 7754 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:00.452404+0000 osd.2 (osd.2) 7755 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -520> 2026-01-22T15:45:01.490+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:32.194064+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7756 sent 7755 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:01.491366+0000 osd.2 (osd.2) 7756 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7755) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:00.452404+0000 osd.2 (osd.2) 7755 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -508> 2026-01-22T15:45:02.443+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:33.194214+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7757 sent 7756 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:02.445125+0000 osd.2 (osd.2) 7757 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7756) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:01.491366+0000 osd.2 (osd.2) 7756 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7757) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:02.445125+0000 osd.2 (osd.2) 7757 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -495> 2026-01-22T15:45:03.433+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:34.194534+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7758 sent 7757 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:03.434956+0000 osd.2 (osd.2) 7758 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7758) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:03.434956+0000 osd.2 (osd.2) 7758 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -484> 2026-01-22T15:45:04.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:35.194814+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7759 sent 7758 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:04.406588+0000 osd.2 (osd.2) 7759 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7759) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:04.406588+0000 osd.2 (osd.2) 7759 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -473> 2026-01-22T15:45:05.418+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:36.195044+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7760 sent 7759 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:05.418755+0000 osd.2 (osd.2) 7760 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7760) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:05.418755+0000 osd.2 (osd.2) 7760 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -459> 2026-01-22T15:45:06.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:37.195279+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7761 sent 7760 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:06.440845+0000 osd.2 (osd.2) 7761 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7761) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:06.440845+0000 osd.2 (osd.2) 7761 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -448> 2026-01-22T15:45:07.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:38.195530+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7762 sent 7761 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:07.478927+0000 osd.2 (osd.2) 7762 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -438> 2026-01-22T15:45:08.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7762) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:07.478927+0000 osd.2 (osd.2) 7762 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:39.195714+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7763 sent 7762 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:08.506897+0000 osd.2 (osd.2) 7763 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -427> 2026-01-22T15:45:09.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7763) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:08.506897+0000 osd.2 (osd.2) 7763 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:40.195902+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7764 sent 7763 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:09.478581+0000 osd.2 (osd.2) 7764 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -416> 2026-01-22T15:45:10.514+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7764) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:09.478581+0000 osd.2 (osd.2) 7764 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:41.196178+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7765 sent 7764 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:10.514406+0000 osd.2 (osd.2) 7765 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -401> 2026-01-22T15:45:11.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7765) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:10.514406+0000 osd.2 (osd.2) 7765 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:42.196389+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7766 sent 7765 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:11.487120+0000 osd.2 (osd.2) 7766 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -390> 2026-01-22T15:45:12.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7766) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:11.487120+0000 osd.2 (osd.2) 7766 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c339800 session 0x55735d10ed20
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: handle_auth_request added challenge on 0x55735c5ca000
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:43.196615+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7767 sent 7766 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:12.522121+0000 osd.2 (osd.2) 7767 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -377> 2026-01-22T15:45:13.502+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7767) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:12.522121+0000 osd.2 (osd.2) 7767 : cluster [WRN] 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:44.196850+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7768 sent 7767 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:13.502489+0000 osd.2 (osd.2) 7768 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -366> 2026-01-22T15:45:14.493+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7768) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:13.502489+0000 osd.2 (osd.2) 7768 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:45.197096+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7769 sent 7768 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:14.493645+0000 osd.2 (osd.2) 7769 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -355> 2026-01-22T15:45:15.543+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7769) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:14.493645+0000 osd.2 (osd.2) 7769 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:46.197367+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7770 sent 7769 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:15.544153+0000 osd.2 (osd.2) 7770 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -341> 2026-01-22T15:45:16.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7770) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:15.544153+0000 osd.2 (osd.2) 7770 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:47.197557+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7771 sent 7770 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:16.495526+0000 osd.2 (osd.2) 7771 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -329> 2026-01-22T15:45:17.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7771) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:16.495526+0000 osd.2 (osd.2) 7771 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:48.197772+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7772 sent 7771 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:17.456358+0000 osd.2 (osd.2) 7772 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -318> 2026-01-22T15:45:18.479+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7772) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:17.456358+0000 osd.2 (osd.2) 7772 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:49.197995+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7773 sent 7772 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:18.481024+0000 osd.2 (osd.2) 7773 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -307> 2026-01-22T15:45:19.439+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:50.198172+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7774 sent 7773 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:19.440877+0000 osd.2 (osd.2) 7774 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7773) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:18.481024+0000 osd.2 (osd.2) 7773 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -296> 2026-01-22T15:45:20.471+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:51.198429+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7775 sent 7774 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:20.472841+0000 osd.2 (osd.2) 7775 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7774) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:19.440877+0000 osd.2 (osd.2) 7774 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7775) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:20.472841+0000 osd.2 (osd.2) 7775 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -280> 2026-01-22T15:45:21.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,4,8,11,16,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:52.198937+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7776 sent 7775 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:21.522854+0000 osd.2 (osd.2) 7776 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -270> 2026-01-22T15:45:22.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7776) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:21.522854+0000 osd.2 (osd.2) 7776 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:53.199138+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7777 sent 7776 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:22.487170+0000 osd.2 (osd.2) 7777 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -259> 2026-01-22T15:45:23.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7777) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:22.487170+0000 osd.2 (osd.2) 7777 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:54.199393+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7778 sent 7777 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:23.509531+0000 osd.2 (osd.2) 7778 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -248> 2026-01-22T15:45:24.498+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7778) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:23.509531+0000 osd.2 (osd.2) 7778 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,4,8,10,17,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:55.199575+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7779 sent 7778 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:24.500030+0000 osd.2 (osd.2) 7779 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -236> 2026-01-22T15:45:25.512+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7779) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:24.500030+0000 osd.2 (osd.2) 7779 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:56.199773+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7780 sent 7779 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:25.514102+0000 osd.2 (osd.2) 7780 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -222> 2026-01-22T15:45:26.653+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:57.199956+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7781 sent 7780 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:26.654492+0000 osd.2 (osd.2) 7781 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7780) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:25.514102+0000 osd.2 (osd.2) 7780 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -211> 2026-01-22T15:45:27.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:58.200203+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7782 sent 7781 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:27.699717+0000 osd.2 (osd.2) 7782 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -202> 2026-01-22T15:45:28.738+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7781) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:26.654492+0000 osd.2 (osd.2) 7781 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7782) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:27.699717+0000 osd.2 (osd.2) 7782 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:44:59.200483+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7783 sent 7782 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:28.739304+0000 osd.2 (osd.2) 7783 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,4,8,10,17,62,69,41])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -188> 2026-01-22T15:45:29.725+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7783) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:28.739304+0000 osd.2 (osd.2) 7783 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:00.200650+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7784 sent 7783 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:29.726551+0000 osd.2 (osd.2) 7784 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -177> 2026-01-22T15:45:30.681+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7784) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:29.726551+0000 osd.2 (osd.2) 7784 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:01.200804+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7785 sent 7784 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:30.683489+0000 osd.2 (osd.2) 7785 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -163> 2026-01-22T15:45:31.693+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7785) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:30.683489+0000 osd.2 (osd.2) 7785 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:02.200995+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7786 sent 7785 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:31.695044+0000 osd.2 (osd.2) 7786 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -152> 2026-01-22T15:45:32.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7786) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:31.695044+0000 osd.2 (osd.2) 7786 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:03.201244+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7787 sent 7786 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:32.652429+0000 osd.2 (osd.2) 7787 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -141> 2026-01-22T15:45:33.627+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:04.201486+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7788 sent 7787 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:33.629112+0000 osd.2 (osd.2) 7788 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7787) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:32.652429+0000 osd.2 (osd.2) 7787 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -130> 2026-01-22T15:45:34.656+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:05.201665+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7789 sent 7788 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:34.657858+0000 osd.2 (osd.2) 7789 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7788) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:33.629112+0000 osd.2 (osd.2) 7788 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7789) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:34.657858+0000 osd.2 (osd.2) 7789 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -116> 2026-01-22T15:45:35.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:06.201852+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7790 sent 7789 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:35.652453+0000 osd.2 (osd.2) 7790 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7790) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:35.652453+0000 osd.2 (osd.2) 7790 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -102> 2026-01-22T15:45:36.648+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:07.202127+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7791 sent 7790 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:36.649647+0000 osd.2 (osd.2) 7791 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7791) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:36.649647+0000 osd.2 (osd.2) 7791 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:    -91> 2026-01-22T15:45:37.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:08.202620+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7792 sent 7791 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:37.699925+0000 osd.2 (osd.2) 7792 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7792) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:37.699925+0000 osd.2 (osd.2) 7792 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:    -80> 2026-01-22T15:45:38.745+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:09.202810+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7793 sent 7792 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:38.746513+0000 osd.2 (osd.2) 7793 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:    -71> 2026-01-22T15:45:39.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:10.203014+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7794 sent 7793 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:39.795358+0000 osd.2 (osd.2) 7794 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7793) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:38.746513+0000 osd.2 (osd.2) 7793 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 15:45:44 compute-2 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 15:45:44 compute-2 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:    -55> 2026-01-22T15:45:40.815+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:11.203352+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 2 last_log 7795 sent 7794 num 2 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:40.816605+0000 osd.2 (osd.2) 7795 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7794) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:39.795358+0000 osd.2 (osd.2) 7794 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7795) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:40.816605+0000 osd.2 (osd.2) 7795 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:    -43> 2026-01-22T15:45:41.808+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224100352 unmapped: 9338880 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'config diff' '{prefix=config diff}'
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:12.203539+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7796 sent 7795 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:41.809987+0000 osd.2 (osd.2) 7796 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'config show' '{prefix=config show}'
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7796) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:41.809987+0000 osd.2 (osd.2) 7796 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224288768 unmapped: 9150464 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:    -22> 2026-01-22T15:45:42.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:13.203731+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7797 sent 7796 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:42.856520+0000 osd.2 (osd.2) 7797 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7797) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:42.856520+0000 osd.2 (osd.2) 7797 : cluster [WRN] 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:    -12> 2026-01-22T15:45:43.825+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224411648 unmapped: 9027584 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: tick
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_tickets
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-22T15:45:14.203932+0000)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  log_queue is 1 last_log 7798 sent 7797 num 1 unsent 1 sending 1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  will send 2026-01-22T15:45:43.827404+0000 osd.2 (osd.2) 7798 : cluster [WRN] 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: monclient: _send_mon_message to mon.compute-2 at v2:192.168.122.102:3300/0
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client handle_log_ack log(last 7798) v1
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_client  logged 2026-01-22T15:45:43.827404+0000 osd.2 (osd.2) 7798 : cluster [WRN] 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:44 compute-2 ceph-osd[79779]: do_command 'log dump' '{prefix=log dump}'
Jan 22 15:45:44 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:44 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:44 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:44.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 15:45:45 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/441006344' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 15:45:45 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/274536513' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.18789 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.27731 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.28843 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.18795 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.27743 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.28858 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: pgmap v4162: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.18813 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/527252969' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3267566899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1922483207' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/441006344' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2074599857' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/192074747' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2776501217' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2598294737' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/274536513' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 15:45:45 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:45 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:45 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:45.849+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:45 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:45 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:45 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:45.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:45 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 15:45:45 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/786694620' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:46 compute-2 crontab[293550]: (root) LIST (root)
Jan 22 15:45:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 15:45:46 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2361371598' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 22 15:45:46 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1078206682' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:46 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 22 15:45:46 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1216321723' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:46 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:46 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:46 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:46.845+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.28879 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.27764 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.28897 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.18837 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.28912 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3308796185' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1535643559' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2321101815' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.28924 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/786694620' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/21857564' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/449513707' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2270236522' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.28936 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2361371598' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1364877196' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: pgmap v4163: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1078206682' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/4187776781' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1837579571' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/892894112' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:47 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1216321723' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 15:45:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:45:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 15:45:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:45:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 15:45:47 compute-2 ovn_metadata_agent[143492]: 2026-01-22 15:45:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 15:45:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:47.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:47 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:47 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:47.844+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:47 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:47 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 22 15:45:47 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1073887977' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:47 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:47 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:47 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.28951 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3353346489' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/327179314' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2477755921' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/700747628' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.28969 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4260425190' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3986922559' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/715362271' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/536748687' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/4268155440' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1073887977' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1452673441' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1543650651' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 22 15:45:48 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1048087941' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 22 15:45:48 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4074908786' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 22 15:45:48 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4169395491' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:48 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:48 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:48 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:48.830+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 22 15:45:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3388168716' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.27863 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.28999 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.27890 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3410981744' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1842434873' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2772120358' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1048087941' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: pgmap v4164: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4074908786' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4169395491' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2500690000' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3352944826' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3388168716' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 22 15:45:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1693688001' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:49.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 22 15:45:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1153195434' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:49 compute-2 systemd[1]: Starting Hostname Service...
Jan 22 15:45:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 22 15:45:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2448017329' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:49 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 22 15:45:49 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2247528220' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:49 compute-2 systemd[1]: Started Hostname Service.
Jan 22 15:45:49 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:49 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:49 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:49.858+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:49 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:49 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:49 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3813666031' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.18954 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.27920 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.18948 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.18969 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.27935 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.18975 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1693688001' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.18984 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1153195434' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.27950 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3017593338' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2448017329' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2247528220' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2359320859' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1925810316' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3813666031' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1703807545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1371623889' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/317296694' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:50 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:50 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:50.813+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/917443970' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:50 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 22 15:45:50 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4082909003' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.19002 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.27962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: Health check update: 212 slow ops, oldest one blocked for 7738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1703807545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.19026 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.27989 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: pgmap v4165: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/473160357' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2909934921' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1371623889' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/317296694' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/917443970' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4082909003' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1058460210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:51 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:51 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:51 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:51.766+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:51 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:51 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:51 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:51 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:51 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:52 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.19044 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.28001 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.19071 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.28025 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.29134 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.29140 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2695759947' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/47089867' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:52 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:52 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 22 15:45:52 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4237834724' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:52 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:52 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:52.814+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:52 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 22 15:45:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3956659008' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.29146 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.29152 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.29161 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.28082 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.29191 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:53 compute-2 ceph-mon[77081]: pgmap v4166: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3034816724' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4237834724' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3665761610' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3037175596' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3956659008' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 15:45:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:53.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 22 15:45:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2645925705' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:53.829+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:53 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:53 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:53 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 22 15:45:53 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4060242858' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:53 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:53 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:53 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:54 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.29209 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.29224 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.19152 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/38607233' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.29239 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2645925705' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/3424748525' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/1398070699' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/4060242858' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1133738816' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:54 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:54 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:54 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:54.873+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:54 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:54 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 22 15:45:55 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3195370970' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 15:45:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:55.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='client.28142 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: pgmap v4167: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4020820940' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3445601575' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 15:45:55 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1643388399' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/357813982' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3195370970' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 15:45:55 compute-2 ceph-mon[77081]: Health check update: 212 slow ops, oldest one blocked for 7743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 15:45:55 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:55.870+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:55 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:55 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:55 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 22 15:45:55 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1271623304' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 15:45:55 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:55 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:55 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 22 15:45:56 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2223392803' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.19200 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.28178 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.29308 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/706488296' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/4045185106' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1271623304' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/1376995154' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/2223392803' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 22 15:45:56 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 22 15:45:56 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3023727315' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 15:45:56 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:56.884+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:56 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:56 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:57 compute-2 podman[295071]: 2026-01-22 15:45:57.148557003 +0000 UTC m=+0.188369860 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 15:45:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 15:45:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 15:45:57 compute-2 ceph-mon[77081]: from='client.28196 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:57 compute-2 ceph-mon[77081]: pgmap v4168: 305 pgs: 2 active+clean+laggy, 303 active+clean; 882 MiB data, 652 MiB used, 20 GiB / 21 GiB avail
Jan 22 15:45:57 compute-2 ceph-mon[77081]: from='client.28202 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 22 15:45:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/3023727315' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 22 15:45:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/2389585643' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 22 15:45:57 compute-2 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.100:0/2138983963' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 22 15:45:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.102:0/1648956903' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 22 15:45:57 compute-2 ceph-mon[77081]: from='client.? 192.168.122.101:0/3393666327' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 22 15:45:57 compute-2 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:57.916+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:57 compute-2 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 15:45:57 compute-2 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 15:45:57 compute-2 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 15:45:57 compute-2 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 15:45:57 compute-2 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:57.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 15:45:57 compute-2 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 22 15:45:57 compute-2 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2512713313' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
